[ { "msg_contents": "I have a unloaded development server running 8.4b1 that is returning\nfrom a 'select * from pg_locks' in around 5 ms. While the time itself\nis not a big deal, I was curious and tested querying locks on a fairly\nbusy (200-500 tps sustained) running 8.2 on inferior hardware. This\nreturned (after an initial slower time) in well under 1 ms most of the\ntime. Is this noteworthy? What factors slow down best case\npg_lock_status() performance?\n\nedit: I bet it's the max_locks_per_transaction parameter. I really\ncranked it on the dev box during an experiment, to 16384.\ntesting...yup that's it. Are there any negative performance\nside-effects that could result from (perhaps overly) cranked\nmax_locks_per_transaction?\n\nmerlin\n", "msg_date": "Tue, 28 Apr 2009 13:53:45 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": true, "msg_subject": "pg_lock_status() performance" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> I have a unloaded development server running 8.4b1 that is returning\n> from a 'select * from pg_locks' in around 5 ms. While the time itself\n> is not a big deal, I was curious and tested querying locks on a fairly\n> busy (200-500 tps sustained) running 8.2 on inferior hardware. This\n> returned (after an initial slower time) in well under 1 ms most of the\n> time. Is this noteworthy? What factors slow down best case\n> pg_lock_status() performance?\n\n> edit: I bet it's the max_locks_per_transaction parameter. I really\n> cranked it on the dev box during an experiment, to 16384.\n> testing...yup that's it. Are there any negative performance\n> side-effects that could result from (perhaps overly) cranked\n> max_locks_per_transaction?\n\n[squint...] AFAICS the only *direct* cost component in pg_lock_status\nis the number of locks actually held or awaited. If there's a\nnoticeable component that depends on max_locks_per_transaction, it must\nbe from hash_seq_search() iterating over empty hash buckets. Which is\na mighty tight loop. What did you have max_connections set to?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Apr 2009 17:41:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_lock_status() performance " }, { "msg_contents": "On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane <[email protected]> wrote:\n> Merlin Moncure <[email protected]> writes:\n>> I have a unloaded development server running 8.4b1 that is returning\n>> from a 'select * from pg_locks' in around 5 ms.  While the time itself\n>> is not a big deal, I was curious and tested querying locks on a fairly\n>> busy (200-500 tps sustained)  running 8.2 on inferior hardware.  This\n>> returned (after an initial slower time) in well under 1 ms most of the\n>> time.  Is this noteworthy?  What factors slow down best case\n>> pg_lock_status() performance?\n>\n>> edit: I bet it's the max_locks_per_transaction parameter. I really\n>> cranked it on the dev box during an experiment, to 16384.\n>> testing...yup that's it.  Are there any negative performance\n>> side-effects that could result from (perhaps overly) cranked\n>> max_locks_per_transaction?\n>\n> [squint...]  AFAICS the only *direct* cost component in pg_lock_status\n> is the number of locks actually held or awaited.  If there's a\n> noticeable component that depends on max_locks_per_transaction, it must\n> be from hash_seq_search() iterating over empty hash buckets.  Which is\n> a mighty tight loop.  
What did you have max_connections set to?\n\n16384 :D\n\n(I was playing with a function that created a large number of tables/schemas)\n\nmerlin\n", "msg_date": "Tue, 28 Apr 2009 17:42:16 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_lock_status() performance" }, { "msg_contents": "On Tue, Apr 28, 2009 at 5:42 PM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane <[email protected]> wrote:\n>> Merlin Moncure <[email protected]> writes:\n>>> I have a unloaded development server running 8.4b1 that is returning\n>>> from a 'select * from pg_locks' in around 5 ms.  While the time itself\n>>> is not a big deal, I was curious and tested querying locks on a fairly\n>>> busy (200-500 tps sustained)  running 8.2 on inferior hardware.  This\n>>> returned (after an initial slower time) in well under 1 ms most of the\n>>> time.  Is this noteworthy?  What factors slow down best case\n>>> pg_lock_status() performance?\n>>\n>>> edit: I bet it's the max_locks_per_transaction parameter. I really\n>>> cranked it on the dev box during an experiment, to 16384.\n>>> testing...yup that's it.  Are there any negative performance\n>>> side-effects that could result from (perhaps overly) cranked\n>>> max_locks_per_transaction?\n>>\n>> [squint...]  AFAICS the only *direct* cost component in pg_lock_status\n>> is the number of locks actually held or awaited.  If there's a\n>> noticeable component that depends on max_locks_per_transaction, it must\n>> be from hash_seq_search() iterating over empty hash buckets.  Which is\n>> a mighty tight loop.  What did you have max_connections set to?\n>\n> 16384 :D\n>\n> (I was playing with a function that created a large number of tables/schemas)\n\noops. misread that...the default 100.\n\nmerlin\n", "msg_date": "Tue, 28 Apr 2009 17:43:08 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_lock_status() performance" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n>> On Tue, Apr 28, 2009 at 5:41 PM, Tom Lane <[email protected]> wrote:\n>>> [squint...]  AFAICS the only *direct* cost component in pg_lock_status\n>>> is the number of locks actually held or awaited.  If there's a\n>>> noticeable component that depends on max_locks_per_transaction, it must\n>>> be from hash_seq_search() iterating over empty hash buckets.  Which is\n>>> a mighty tight loop.  What did you have max_connections set to?\n\n> oops. misread that...the default 100.\n\nHmm ... so we are talking about 1638400 vs 6400 hash buckets ... if that\nadds 4 msec to your query time then it's taking about 2.5 nsec per empty\nbucket, which I guess is not out of line for three lines of C code.\nSo that does seem to be the issue.\n\nWe've noticed before that hash_seq_search() can be a bottleneck for\nlarge lightly-populated hash tables. I wonder if there's a good way\nto reimplement it to avoid having to scan empty buckets? 
There are\nenough constraints on the hashtable implementation that I'm not sure\nwe can change it easily.\n\nAnyway, as regards your original question: I don't see any other\nnon-debug hash_seq_searches of LockMethodProcLockHash, so this\nparticular issue probably doesn't affect anything except pg_locks.\nNonetheless, holding lock on that table for four msec is not good, so\nyou could expect to see some performance glitches when you examine\npg_locks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Apr 2009 18:17:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_lock_status() performance " } ]
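
A minimal sketch of how one could check the settings this thread turns on; the statements are standard PostgreSQL/psql, but the commented values are taken from the thread rather than re-verified here. As Tom's arithmetic above implies, the shared lock table is sized from max_locks_per_transaction times max_connections (16384 x 100 = 1,638,400 buckets on the dev box versus 64 x 100 = 6,400 at the defaults), and scanning pg_locks walks every bucket even when almost all of them are empty.

    SHOW max_locks_per_transaction;   -- 16384 on the dev box in this thread; the default is 64
    SHOW max_connections;             -- left at the default of 100 here
    \timing
    SELECT count(*) FROM pg_locks;    -- runtime tracks total buckets scanned, not locks actually held
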
[ { "msg_contents": "\nHi, I'm hoping you guys can help with improving this query I'm having \na problem with. The main problem is that the query plan changes \ndepending on the value of the LIMIT clause, with small values using a \npoor plan and running very slowly. The two times are roughly 5 minutes \nfor the bad plan and 1.5 secs for the good plan.\n\nI have read a little about how the query planner takes into account \nthe limit clause, and I can see the effect this has on the costs shown \nby explain. The problem is that the estimated cost ends up being \nwildly inaccurate. I'm not sure if this a problem with the planner or \nif it is something I am doing wrong on my end.\n\nthe query (without the limit clause):\n\nSELECT ID FROM ps_image WHERE id IN (SELECT image_id FROM \nps_gallery_image WHERE gallery_id='G00007ejKGoWS_cY') ORDER BY \nLOWER(FILE_NAME) ASC\n\nThe ps_image table has about 24 million rows, ps_gallery_image has \nabout 14 million. The query above produces roughly 50 thousand rows.\n\nWhen looking at the explain with the limit, I can see the \ninterpolation that the planner does for the limit node (arriving at a \nfinal cost of 458.32 for this example) but not sure why it is \ninaccurate compared to the actual times.\n\nThanks in advance for taking a look at this, let me know if there is \nadditional information I should provide.\n\nSome information about the tables and the explains follow below.\n\nJames Nelson\n\n[james@db2 ~] psql --version\npsql (PostgreSQL) 8.3.5\ncontains support for command-line editing\n\nphotoshelter=# \\d ps_image\n Table \"public.ps_image\"\n Column | Type | Modifiers\n---------------+-------------------------- \n+-------------------------------------------\nid | character varying(16) | not null\nuser_id | character varying(16) |\nalbum_id | character varying(16) | not null\nparent_id | character varying(16) |\nfile_name | character varying(200) |\nfile_size | bigint |\n.... 20 rows snipped ....\nIndexes:\n \"ps_image_pkey\" PRIMARY KEY, btree (id)\n \"i_file_name_l\" btree (lower(file_name::text))\n.... 
indexes, fk constraints and triggers snipped ....\n\nphotoshelter=# \\d ps_gallery_image\n Table \"public.ps_gallery_image\"\n Column | Type | Modifiers\n---------------+--------------------------+------------------------\ngallery_id | character varying(16) | not null\nimage_id | character varying(16) | not null\ndisplay_order | integer | not null default 0\ncaption | character varying(2000) |\nctime | timestamp with time zone | not null default now()\nmtime | timestamp with time zone | not null default now()\nid | character varying(16) | not null\nIndexes:\n \"ps_gallery_image_pkey\" PRIMARY KEY, btree (id)\n \"gi_gallery_id\" btree (gallery_id)\n \"gi_image_id\" btree (image_id)\nForeign-key constraints:\n \"ps_gallery_image_gallery_id_fkey\" FOREIGN KEY (gallery_id) \nREFERENCES ps_gallery(id) ON DELETE CASCADE\n \"ps_gallery_image_image_id_fkey\" FOREIGN KEY (image_id) REFERENCES \nps_image(id) ON DELETE CASCADE\nTriggers:\n ps_image_gi_sync AFTER INSERT OR DELETE OR UPDATE ON \nps_gallery_image FOR EACH ROW EXECUTE PROCEDURE ps_image_sync()\n\n= \n= \n= \n= \n= \n= \n= \n= \n= \n= \n= \n= \n= \n========================================================================\nexplain analyze for bad plan\n\nphotoshelter=# explain analyze SELECT ID FROM ps_image WHERE id IN \n(SELECT image_id FROM ps_gallery_image WHERE \ngallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC limit 1;\n QUERY \n PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..458.32 rows=1 width=36) (actual \ntime=709831.847..709831.847 rows=1 loops=1)\n -> Nested Loop IN Join (cost=0.00..17700128.78 rows=38620 \nwidth=36) (actual time=709831.845..709831.845 rows=1 loops=1)\n -> Index Scan using i_file_name_l on ps_image \n(cost=0.00..1023863.22 rows=24460418 width=36) (actual \ntime=0.063..271167.293 rows=8876340 loops=1)\n -> Index Scan using gi_image_id on ps_gallery_image \n(cost=0.00..0.85 rows=1 width=17) (actual time=0.048..0.048 rows=0 \nloops=8876340)\n Index Cond: ((ps_gallery_image.image_id)::text = \n(ps_image.id)::text)\n Filter: ((ps_gallery_image.gallery_id)::text = \n'G00007ejKGoWS_cY'::text)\nTotal runtime: 709831.932 ms\n\n= \n= \n= \n= \n= \n= \n= \n= \n= \n= \n= \n= \n= \n========================================================================\nexplain analyze for good plan\n\nphotoshelter=# explain analyze SELECT ID FROM ps_image WHERE id IN \n(SELECT image_id FROM ps_gallery_image WHERE \ngallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC limit 600;\n QUERY \n PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=154650.99..154652.49 rows=600 width=36) (actual \ntime=1886.038..1886.404 rows=600 loops=1)\n -> Sort (cost=154650.99..154747.54 rows=38619 width=36) (actual \ntime=1886.038..1886.174 rows=600 loops=1)\n Sort Key: (lower((ps_image.file_name)::text))\n Sort Method: top-N heapsort Memory: 75kB\n -> Nested Loop (cost=42394.02..152675.86 rows=38619 \nwidth=36) (actual time=135.132..1838.491 rows=50237 loops=1)\n -> HashAggregate (cost=42394.02..42780.21 rows=38619 \nwidth=17) (actual time=135.079..172.563 rows=50237 loops=1)\n -> Index Scan using gi_gallery_id on \nps_gallery_image (cost=0.00..42271.79 rows=48891 width=17) (actual \ntime=0.063..97.539 rows=50237 loops=1)\n Index Cond: ((gallery_id)::text = 
\n'G00007ejKGoWS_cY'::text)\n -> Index Scan using ps_image_pkey on ps_image \n(cost=0.00..2.83 rows=1 width=36) (actual time=0.031..0.031 rows=1 \nloops=50237)\n Index Cond: ((ps_image.id)::text = \n(ps_gallery_image.image_id)::text)\nTotal runtime: 1886.950 ms\n\n\n\n\n\n\n", "msg_date": "Wed, 29 Apr 2009 13:51:34 -0400", "msg_from": "James Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "bad plan and LIMIT" }, { "msg_contents": "You could try changing the IN to an EXISTS, that may alter how the \noptimizer weighs the limit.\n\nSELECT ID FROM ps_image WHERE EXISTS (SELECT null FROM \nps_gallery_image WHERE gallery_id ='G00007ejKGoWS_cY' and image_id = \nps_image.id) ORDER BY LOWER(FILE_NAME) ASC\n\nOn 30/04/2009, at 3:51 AM, James Nelson wrote:\n\n>\n> Hi, I'm hoping you guys can help with improving this query I'm \n> having a problem with. The main problem is that the query plan \n> changes depending on the value of the LIMIT clause, with small \n> values using a poor plan and running very slowly. The two times are \n> roughly 5 minutes for the bad plan and 1.5 secs for the good plan.\n>\n> I have read a little about how the query planner takes into account \n> the limit clause, and I can see the effect this has on the costs \n> shown by explain. The problem is that the estimated cost ends up \n> being wildly inaccurate. I'm not sure if this a problem with the \n> planner or if it is something I am doing wrong on my end.\n>\n> the query (without the limit clause):\n>\n> SELECT ID FROM ps_image WHERE id IN (SELECT image_id FROM \n> ps_gallery_image WHERE gallery_id='G00007ejKGoWS_cY') ORDER BY \n> LOWER(FILE_NAME) ASC\n>\n> The ps_image table has about 24 million rows, ps_gallery_image has \n> about 14 million. The query above produces roughly 50 thousand rows.\n>\n> When looking at the explain with the limit, I can see the \n> interpolation that the planner does for the limit node (arriving at \n> a final cost of 458.32 for this example) but not sure why it is \n> inaccurate compared to the actual times.\n>\n> Thanks in advance for taking a look at this, let me know if there is \n> additional information I should provide.\n>\n> Some information about the tables and the explains follow below.\n>\n> James Nelson\n>\n> [james@db2 ~] psql --version\n> psql (PostgreSQL) 8.3.5\n> contains support for command-line editing\n>\n> photoshelter=# \\d ps_image\n> Table \"public.ps_image\"\n> Column | Type | Modifiers\n> ---------------+-------------------------- \n> +-------------------------------------------\n> id | character varying(16) | not null\n> user_id | character varying(16) |\n> album_id | character varying(16) | not null\n> parent_id | character varying(16) |\n> file_name | character varying(200) |\n> file_size | bigint |\n> .... 20 rows snipped ....\n> Indexes:\n> \"ps_image_pkey\" PRIMARY KEY, btree (id)\n> \"i_file_name_l\" btree (lower(file_name::text))\n> .... 
indexes, fk constraints and triggers snipped ....\n>\n> photoshelter=# \\d ps_gallery_image\n> Table \"public.ps_gallery_image\"\n> Column | Type | Modifiers\n> ---------------+--------------------------+------------------------\n> gallery_id | character varying(16) | not null\n> image_id | character varying(16) | not null\n> display_order | integer | not null default 0\n> caption | character varying(2000) |\n> ctime | timestamp with time zone | not null default now()\n> mtime | timestamp with time zone | not null default now()\n> id | character varying(16) | not null\n> Indexes:\n> \"ps_gallery_image_pkey\" PRIMARY KEY, btree (id)\n> \"gi_gallery_id\" btree (gallery_id)\n> \"gi_image_id\" btree (image_id)\n> Foreign-key constraints:\n> \"ps_gallery_image_gallery_id_fkey\" FOREIGN KEY (gallery_id) \n> REFERENCES ps_gallery(id) ON DELETE CASCADE\n> \"ps_gallery_image_image_id_fkey\" FOREIGN KEY (image_id) REFERENCES \n> ps_image(id) ON DELETE CASCADE\n> Triggers:\n> ps_image_gi_sync AFTER INSERT OR DELETE OR UPDATE ON \n> ps_gallery_image FOR EACH ROW EXECUTE PROCEDURE ps_image_sync()\n>\n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> ======================================================================\n> explain analyze for bad plan\n>\n> photoshelter=# explain analyze SELECT ID FROM ps_image WHERE id IN \n> (SELECT image_id FROM ps_gallery_image WHERE \n> gallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC limit 1;\n> QUERY \n> PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..458.32 rows=1 width=36) (actual \n> time=709831.847..709831.847 rows=1 loops=1)\n> -> Nested Loop IN Join (cost=0.00..17700128.78 rows=38620 \n> width=36) (actual time=709831.845..709831.845 rows=1 loops=1)\n> -> Index Scan using i_file_name_l on ps_image \n> (cost=0.00..1023863.22 rows=24460418 width=36) (actual \n> time=0.063..271167.293 rows=8876340 loops=1)\n> -> Index Scan using gi_image_id on ps_gallery_image \n> (cost=0.00..0.85 rows=1 width=17) (actual time=0.048..0.048 rows=0 \n> loops=8876340)\n> Index Cond: ((ps_gallery_image.image_id)::text = \n> (ps_image.id)::text)\n> Filter: ((ps_gallery_image.gallery_id)::text = \n> 'G00007ejKGoWS_cY'::text)\n> Total runtime: 709831.932 ms\n>\n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> = \n> ======================================================================\n> explain analyze for good plan\n>\n> photoshelter=# explain analyze SELECT ID FROM ps_image WHERE id IN \n> (SELECT image_id FROM ps_gallery_image WHERE \n> gallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC limit \n> 600;\n> QUERY \n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=154650.99..154652.49 rows=600 width=36) (actual \n> time=1886.038..1886.404 rows=600 loops=1)\n> -> Sort (cost=154650.99..154747.54 rows=38619 width=36) (actual \n> time=1886.038..1886.174 rows=600 loops=1)\n> Sort Key: (lower((ps_image.file_name)::text))\n> Sort Method: top-N heapsort Memory: 75kB\n> -> Nested Loop (cost=42394.02..152675.86 rows=38619 \n> width=36) (actual time=135.132..1838.491 rows=50237 loops=1)\n> -> HashAggregate (cost=42394.02..42780.21 rows=38619 \n> width=17) (actual time=135.079..172.563 rows=50237 loops=1)\n> -> Index 
Scan using gi_gallery_id on \n> ps_gallery_image (cost=0.00..42271.79 rows=48891 width=17) (actual \n> time=0.063..97.539 rows=50237 loops=1)\n> Index Cond: ((gallery_id)::text = \n> 'G00007ejKGoWS_cY'::text)\n> -> Index Scan using ps_image_pkey on ps_image \n> (cost=0.00..2.83 rows=1 width=36) (actual time=0.031..0.031 rows=1 \n> loops=50237)\n> Index Cond: ((ps_image.id)::text = \n> (ps_gallery_image.image_id)::text)\n> Total runtime: 1886.950 ms\n>\n>\n>\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n", "msg_date": "Fri, 1 May 2009 18:22:59 +1000", "msg_from": "Adam Ruth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan and LIMIT" }, { "msg_contents": "use join instead of where in();\n", "msg_date": "Fri, 1 May 2009 09:32:47 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan and LIMIT" }, { "msg_contents": "EXISTS won't help much either, postgresql is not too fast, when it\ncomes to that sort of approach.\njoin is always going to be fast, it is about time you learn joins and\nuse them ;)\n", "msg_date": "Fri, 1 May 2009 12:12:19 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan and LIMIT" }, { "msg_contents": "\nI had tried using exists but both the forms of the query (with limit \nand without) performed much worse.\n\n James\n\nOn May 1, 2009, at 4:22 AM, Adam Ruth wrote:\n\n> You could try changing the IN to an EXISTS, that may alter how the \n> optimizer weighs the limit.\n>\n>\n> SELECT ID FROM ps_image WHERE EXISTS (SELECT null FROM \n> ps_gallery_image WHERE gallery_id ='G00007ejKGoWS_cY' and image_id = \n> ps_image.id) ORDER BY LOWER(FILE_NAME) ASC\n>\n> On 30/04/2009, at 3:51 AM, James Nelson wrote:\n>\n>>\n>> Hi, I'm hoping you guys can help with improving this query I'm \n>> having a problem with. The main problem is that the query plan \n>> changes depending on the value of the LIMIT clause, with small \n>> values using a poor plan and running very slowly. The two times are \n>> roughly 5 minutes for the bad plan and 1.5 secs for the good plan.\n>>\n>> I have read a little about how the query planner takes into account \n>> the limit clause, and I can see the effect this has on the costs \n>> shown by explain. The problem is that the estimated cost ends up \n>> being wildly inaccurate. I'm not sure if this a problem with the \n>> planner or if it is something I am doing wrong on my end.\n>>\n>> the query (without the limit clause):\n>>\n>> SELECT ID FROM ps_image WHERE id IN (SELECT image_id FROM \n>> ps_gallery_image WHERE gallery_id='G00007ejKGoWS_cY') ORDER BY \n>> LOWER(FILE_NAME) ASC\n>>\n>> The ps_image table has about 24 million rows, ps_gallery_image has \n>> about 14 million. 
The query above produces roughly 50 thousand rows.\n>>\n>> When looking at the explain with the limit, I can see the \n>> interpolation that the planner does for the limit node (arriving at \n>> a final cost of 458.32 for this example) but not sure why it is \n>> inaccurate compared to the actual times.\n>>\n>> Thanks in advance for taking a look at this, let me know if there \n>> is additional information I should provide.\n>>\n>> Some information about the tables and the explains follow below.\n>>\n>> James Nelson\n>>\n>> [james@db2 ~] psql --version\n>> psql (PostgreSQL) 8.3.5\n>> contains support for command-line editing\n>>\n>> photoshelter=# \\d ps_image\n>> Table \"public.ps_image\"\n>> Column | Type | Modifiers\n>> ---------------+-------------------------- \n>> +-------------------------------------------\n>> id | character varying(16) | not null\n>> user_id | character varying(16) |\n>> album_id | character varying(16) | not null\n>> parent_id | character varying(16) |\n>> file_name | character varying(200) |\n>> file_size | bigint |\n>> .... 20 rows snipped ....\n>> Indexes:\n>> \"ps_image_pkey\" PRIMARY KEY, btree (id)\n>> \"i_file_name_l\" btree (lower(file_name::text))\n>> .... indexes, fk constraints and triggers snipped ....\n>>\n>> photoshelter=# \\d ps_gallery_image\n>> Table \"public.ps_gallery_image\"\n>> Column | Type | Modifiers\n>> ---------------+--------------------------+------------------------\n>> gallery_id | character varying(16) | not null\n>> image_id | character varying(16) | not null\n>> display_order | integer | not null default 0\n>> caption | character varying(2000) |\n>> ctime | timestamp with time zone | not null default now()\n>> mtime | timestamp with time zone | not null default now()\n>> id | character varying(16) | not null\n>> Indexes:\n>> \"ps_gallery_image_pkey\" PRIMARY KEY, btree (id)\n>> \"gi_gallery_id\" btree (gallery_id)\n>> \"gi_image_id\" btree (image_id)\n>> Foreign-key constraints:\n>> \"ps_gallery_image_gallery_id_fkey\" FOREIGN KEY (gallery_id) \n>> REFERENCES ps_gallery(id) ON DELETE CASCADE\n>> \"ps_gallery_image_image_id_fkey\" FOREIGN KEY (image_id) REFERENCES \n>> ps_image(id) ON DELETE CASCADE\n>> Triggers:\n>> ps_image_gi_sync AFTER INSERT OR DELETE OR UPDATE ON \n>> ps_gallery_image FOR EACH ROW EXECUTE PROCEDURE ps_image_sync()\n>>\n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> =====================================================================\n>> explain analyze for bad plan\n>>\n>> photoshelter=# explain analyze SELECT ID FROM ps_image WHERE id IN \n>> (SELECT image_id FROM ps_gallery_image WHERE \n>> gallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC limit 1;\n>> QUERY \n>> PLAN\n>> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..458.32 rows=1 width=36) (actual \n>> time=709831.847..709831.847 rows=1 loops=1)\n>> -> Nested Loop IN Join (cost=0.00..17700128.78 rows=38620 \n>> width=36) (actual time=709831.845..709831.845 rows=1 loops=1)\n>> -> Index Scan using i_file_name_l on ps_image \n>> (cost=0.00..1023863.22 rows=24460418 width=36) (actual \n>> time=0.063..271167.293 rows=8876340 loops=1)\n>> -> Index Scan using gi_image_id on ps_gallery_image \n>> (cost=0.00..0.85 rows=1 width=17) (actual time=0.048..0.048 rows=0 \n>> loops=8876340)\n>> Index Cond: ((ps_gallery_image.image_id)::text = \n>> 
(ps_image.id)::text)\n>> Filter: ((ps_gallery_image.gallery_id)::text = \n>> 'G00007ejKGoWS_cY'::text)\n>> Total runtime: 709831.932 ms\n>>\n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> = \n>> =====================================================================\n>> explain analyze for good plan\n>>\n>> photoshelter=# explain analyze SELECT ID FROM ps_image WHERE id IN \n>> (SELECT image_id FROM ps_gallery_image WHERE \n>> gallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC limit \n>> 600;\n>> QUERY \n>> PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=154650.99..154652.49 rows=600 width=36) (actual \n>> time=1886.038..1886.404 rows=600 loops=1)\n>> -> Sort (cost=154650.99..154747.54 rows=38619 width=36) (actual \n>> time=1886.038..1886.174 rows=600 loops=1)\n>> Sort Key: (lower((ps_image.file_name)::text))\n>> Sort Method: top-N heapsort Memory: 75kB\n>> -> Nested Loop (cost=42394.02..152675.86 rows=38619 \n>> width=36) (actual time=135.132..1838.491 rows=50237 loops=1)\n>> -> HashAggregate (cost=42394.02..42780.21 rows=38619 \n>> width=17) (actual time=135.079..172.563 rows=50237 loops=1)\n>> -> Index Scan using gi_gallery_id on \n>> ps_gallery_image (cost=0.00..42271.79 rows=48891 width=17) (actual \n>> time=0.063..97.539 rows=50237 loops=1)\n>> Index Cond: ((gallery_id)::text = \n>> 'G00007ejKGoWS_cY'::text)\n>> -> Index Scan using ps_image_pkey on ps_image \n>> (cost=0.00..2.83 rows=1 width=36) (actual time=0.031..0.031 rows=1 \n>> loops=50237)\n>> Index Cond: ((ps_image.id)::text = \n>> (ps_gallery_image.image_id)::text)\n>> Total runtime: 1886.950 ms\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> -- \n>> Sent via pgsql-performance mailing list ([email protected] \n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n", "msg_date": "Fri, 1 May 2009 10:55:26 -0400", "msg_from": "James Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad plan and LIMIT" }, { "msg_contents": "James Nelson <[email protected]> writes:\n> Hi, I'm hoping you guys can help with improving this query I'm having \n> a problem with. The main problem is that the query plan changes \n> depending on the value of the LIMIT clause, with small values using a \n> poor plan and running very slowly. The two times are roughly 5 minutes \n> for the bad plan and 1.5 secs for the good plan.\n\n> photoshelter=# explain analyze SELECT ID FROM ps_image WHERE id IN \n> (SELECT image_id FROM ps_gallery_image WHERE \n> gallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC limit 1;\n\nThe problem here is an overoptimistic assessment of how long it will\ntake to find a match to gallery_id='G00007ejKGoWS_cY' while searching\nin file_name order. You might be able to fix that by increasing the\nstatistics target for gallery_id. However, if the issue is not so\nmuch how many occurrences of 'G00007ejKGoWS_cY' there are as that\nthey're all associated with high values of file_name, that won't\nhelp. 
In that case I think it would work to restructure the query\nalong the lines of\n\nselect * from (\n SELECT ID FROM ps_image WHERE id IN \n (SELECT image_id FROM ps_gallery_image WHERE \n gallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC\n offset 0\n ) ss\nlimit 1;\n\nThe OFFSET should act as an optimization fence to prevent the LIMIT\nfrom being used in the planning of the subquery.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 May 2009 10:57:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan and LIMIT " }, { "msg_contents": "\nThe 'in' form and 'join' form produce identical plans for both limit \nand non-limit versions of the query, which I actually think reflects \nwell on the query planner. I also tried a form of the query with the \nsubselect in the from clause to try and force the order the tables \nwere evaluated but the query planner saw through that one too. \nBasically this query:\n\nSELECT ps_image.id FROM\n\t(SELECT image_id FROM ps_gallery_image WHERE \ngallery_id='G00007ejKGoWS_cY') as ids\nINNER JOIN ps_image on ps_image.id = ids.image_id ORDER BY \nLOWER(FILE_NAME) ASC limit 1;\n\nproduces the same plan as the 'in' or the 'join' form when the limit \nclause is present.\n\n James\n\n\n\nOn May 1, 2009, at 4:32 AM, Grzegorz Jaśkiewicz wrote:\n\n> use join instead of where in();\n\n", "msg_date": "Fri, 1 May 2009 11:07:50 -0400", "msg_from": "James Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad plan and LIMIT" }, { "msg_contents": "\nI looked into the distribution of the filenames, in particular I ran a \nquery to see how for into the table the 1st filename would be found.\n\nphotoshelter=# select count(*) from ps_image where lower(file_name) < \n'a-400-001.jpg';\n count\n---------\n 8915832\n\n\nAs you can see the first row is almost 9 million rows into the table. \n(a-400-001.jpg is the first filename returned by the query) which \nimplies the distribution is heavily non-uniform. 
(For uniform \ndistribution the first row should have been within the first 500 rows, \ngive or take)\n\nI tried the query you suggest below but it did not work well, but \nusing it as inspiration the following query does work:\n\nphotoshelter=# explain analyze select * from (\n SELECT ID, lower(file_name) as lfn FROM ps_image WHERE id IN\n (SELECT image_id FROM ps_gallery_image WHERE\n gallery_id='G00007ejKGoWS_cY')\n offset 0\n ) ss\nORDER BY lfn ASC\nlimit 1;\n QUERY \n PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=158946.43..158946.43 rows=1 width=52) (actual \ntime=1539.615..1539.615 rows=1 loops=1)\n -> Sort (cost=158946.43..159044.80 rows=39350 width=52) (actual \ntime=1539.613..1539.613 rows=1 loops=1)\n Sort Key: (lower((ps_image.file_name)::text))\n Sort Method: top-N heapsort Memory: 17kB\n -> Limit (cost=43197.34..158356.18 rows=39350 width=36) \n(actual time=74.530..1499.328 rows=50237 loops=1)\n -> Nested Loop (cost=43197.34..158356.18 rows=39350 \nwidth=36) (actual time=74.529..1475.378 rows=50237 loops=1)\n -> HashAggregate (cost=43197.34..43590.84 \nrows=39350 width=17) (actual time=74.468..110.638 rows=50237 loops=1)\n -> Index Scan using gi_gallery_id on \nps_gallery_image (cost=0.00..43072.80 rows=49816 width=17) (actual \ntime=0.049..46.926 rows=50237 loops=1)\n Index Cond: ((gallery_id)::text = \n'G00007ejKGoWS_cY'::text)\n -> Index Scan using ps_image_pkey on ps_image \n(cost=0.00..2.90 rows=1 width=36) (actual time=0.025..0.025 rows=1 \nloops=50237)\n Index Cond: ((ps_image.id)::text = \n(ps_gallery_image.image_id)::text)\n Total runtime: 1540.032 ms\n(12 rows)\n\nInterestingly to me, while the 'offest 0' did not work as an \noptimization fence in the query you provided, it works as one in the \nquery above. I had tried removing it from the above query, and the \nplan reverted back to the bad form.\n\nThe non-uniform distribution leads me to another question, would it be \npossible to use partial indexes or some other technique to help the \nplanner. Or would the fact that the relevant information, gallery ids \nand filenames, are split across two tables foil any attempt?\n\nIn any case, I'd like to thank everyone for their input. The query \nabove will be a big help.\n\nbe well,\n\n James\n\n\nOn May 1, 2009, at 10:57 AM, Tom Lane wrote:\n\n> James Nelson <[email protected]> writes:\n>> Hi, I'm hoping you guys can help with improving this query I'm having\n>> a problem with. The main problem is that the query plan changes\n>> depending on the value of the LIMIT clause, with small values using a\n>> poor plan and running very slowly. The two times are roughly 5 \n>> minutes\n>> for the bad plan and 1.5 secs for the good plan.\n>\n>> photoshelter=# explain analyze SELECT ID FROM ps_image WHERE id IN\n>> (SELECT image_id FROM ps_gallery_image WHERE\n>> gallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC limit 1;\n>\n> The problem here is an overoptimistic assessment of how long it will\n> take to find a match to gallery_id='G00007ejKGoWS_cY' while searching\n> in file_name order. You might be able to fix that by increasing the\n> statistics target for gallery_id. However, if the issue is not so\n> much how many occurrences of 'G00007ejKGoWS_cY' there are as that\n> they're all associated with high values of file_name, that won't\n> help. 
In that case I think it would work to restructure the query\n> along the lines of\n>\n> select * from (\n> SELECT ID FROM ps_image WHERE id IN\n> (SELECT image_id FROM ps_gallery_image WHERE\n> gallery_id='G00007ejKGoWS_cY') ORDER BY LOWER(FILE_NAME) ASC\n> offset 0\n> ) ss\n> limit 1;\n>\n> The OFFSET should act as an optimization fence to prevent the LIMIT\n> from being used in the planning of the subquery.\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Fri, 1 May 2009 11:42:46 -0400", "msg_from": "James Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad plan and LIMIT " } ]
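
Tom's other suggestion above, raising the statistics target for gallery_id, appears only in prose; a hedged sketch of what it would look like follows. The target of 1000 is an arbitrary illustration, and, as the follow-ups show, it may not help when the matching rows all sort late in lower(file_name) order.

    ALTER TABLE ps_gallery_image ALTER COLUMN gallery_id SET STATISTICS 1000;
    ANALYZE ps_gallery_image;
    -- then re-check the plan chosen for the LIMIT 1 form:
    EXPLAIN ANALYZE
    SELECT id FROM ps_image
    WHERE id IN (SELECT image_id FROM ps_gallery_image
                 WHERE gallery_id = 'G00007ejKGoWS_cY')
    ORDER BY LOWER(file_name) ASC LIMIT 1;
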
[ { "msg_contents": "Hello,\n\nI want to use postgresql for data entries (every minute) from a central heating \nsystem where the timestamp is logged in a table log. For flexibility in the \nfuture for future values and for implementing several high level types I've \nmodelled the values in a separate key/value table called log_details.\n\nA Query for the last valid entry for today looks like (also defined as a view), \nsometimes used without the limit:\nSELECT\n l.id AS id,\n l.datetime AS datetime,\n l.tdate AS tdate,\n l.ttime AS ttime,\n d1.value AS Raumsolltemperatur,\n d2.value AS Raumtemperatur,\n-- a lot more here, stripped for readibility, see link\nFROM\n log l\n-- Order is relevant here\nLEFT OUTER JOIN key_description k1 ON k1.description = 'Raumsolltemperatur'\nLEFT OUTER JOIN log_details d1 ON l.id = d1.fk_id AND d1.fk_keyid = \nk1.keyid\n-- Order is relevant here\nLEFT OUTER JOIN key_description k2 ON k2.description = 'Raumtemperatur'\nLEFT OUTER JOIN log_details d2 ON l.id = d2.fk_id AND d2.fk_keyid = \nk2.keyid\n-- a lot more here, stripped for readibility, see link\nWHERE\n -- 86400 entries in that timeframe\n datetime >= '1970-01-01 00:00:00+02'\n AND datetime < '1970-01-02 00:00:00+02'\nORDER BY\n datetime DESC\nLIMIT 1;\n\nFor me a perfect query plan would look like:\n1.) Fetch the one and only id from table log (or fetch even all necessary id \nentries when no limit is specifie)\n2.) Make the left outer joins\n\nDetails (machine details, table definition, query plans, etc.) \ncan be found to due size limitations at:\nhttp://www.wiesinger.com/tmp/pg_perf.txt\n\nAny ideas how to improve the performance on left outer joins only and how to \nimprove the planner to get better results?\n\nFor this special case a better solution exists but I thing the planner has to \ndo the work.\n-- ...\nWHERE\n -- Also slow: id IN\n -- OK: id =\n id = (\n SELECT\n id\n FROM\n log\n WHERE\n datetime >= '1970-01-01 00:00:00+02'\n AND datetime < '1970-01-02 00:00:00+02'\n ORDER BY\n datetime DESC\n LIMIT 1\n )\nORDER BY\n datetime DESC LIMIT 1;\n\nAny ideas?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n", "msg_date": "Fri, 1 May 2009 15:55:49 +0200 (CEST)", "msg_from": "Gerhard Wiesinger <[email protected]>", "msg_from_op": true, "msg_subject": "Many left outer joins with limit performance" }, { "msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> FROM\n> log l\n> -- Order is relevant here\n> LEFT OUTER JOIN key_description k1 ON k1.description = 'Raumsolltemperatur'\n> LEFT OUTER JOIN log_details d1 ON l.id = d1.fk_id AND d1.fk_keyid = k1.keyid\n\nSurely this query is just plain broken? You're forming a cross product\nof the relevant log lines with the k1 rows having description =\n'Raumsolltemperatur' (I assume this isn't unique, else it's not clear\nwhat the point is) and then the subsequent left join cannot get rid of\nanything. I think probably you meant something different, like\n\nFROM\n log l\nLEFT OUTER JOIN log_details d1 ON l.id = d1.fk_id\nLEFT OUTER JOIN key_description k1 ON k1.description = 'Raumsolltemperatur' AND d1.fk_keyid = k1.keyid\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 May 2009 12:14:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many left outer joins with limit performance " }, { "msg_contents": "Hello Tom,\n\nThe query was logically ok. The main problem was that the VIEW had an \nORDER BY clause where cost went up to very high. 
Indices and unique \nconstraints were minor optimizations.\n\nConclusio: Don't create ORDER BY in VIEW unless really necessary\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n\n\nOn Fri, 1 May 2009, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> FROM\n>> log l\n>> -- Order is relevant here\n>> LEFT OUTER JOIN key_description k1 ON k1.description = 'Raumsolltemperatur'\n>> LEFT OUTER JOIN log_details d1 ON l.id = d1.fk_id AND d1.fk_keyid = k1.keyid\n>\n> Surely this query is just plain broken? You're forming a cross product\n> of the relevant log lines with the k1 rows having description =\n> 'Raumsolltemperatur' (I assume this isn't unique, else it's not clear\n> what the point is) and then the subsequent left join cannot get rid of\n> anything. I think probably you meant something different, like\n>\n> FROM\n> log l\n> LEFT OUTER JOIN log_details d1 ON l.id = d1.fk_id\n> LEFT OUTER JOIN key_description k1 ON k1.description = 'Raumsolltemperatur' AND d1.fk_keyid = k1.keyid\n>\n> \t\t\tregards, tom lane\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sun, 27 Sep 2009 09:09:26 +0200 (CEST)", "msg_from": "Gerhard Wiesinger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Many left outer joins with limit performance" } ]
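
A trimmed illustration of the conclusion Gerhard reports above: keep the ORDER BY out of the view and supply it, together with the LIMIT, in the query that uses the view. The view name is invented, only one of the many value columns is kept, and the join structure is copied from the original posting (see Tom's caveat about it upthread).

    CREATE VIEW log_latest AS
    SELECT l.id, l.datetime, d1.value AS raumsolltemperatur
    FROM log l
    LEFT OUTER JOIN key_description k1 ON k1.description = 'Raumsolltemperatur'
    LEFT OUTER JOIN log_details d1 ON l.id = d1.fk_id AND d1.fk_keyid = k1.keyid;
    -- no ORDER BY inside the view definition

    SELECT * FROM log_latest
    WHERE datetime >= '1970-01-01 00:00:00+02'
      AND datetime < '1970-01-02 00:00:00+02'
    ORDER BY datetime DESC
    LIMIT 1;
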
[ { "msg_contents": "Hi,\nI was looking at the support that PostgreSQL offers for table partitioning at http://www.postgresql.org/docs/8.4/static/ddl-partitioning.html. The concept looks promising, but its maybe fair to say that PG itself doesn't really supports partitioning natively, but one can simulate it using some of the existing PG features (namely inheritance, triggers, rules and constraint exclusion). This simulating does seem to work, but there are some disadvantages and caveats. \nA major disadvantage is obviously that you need to set up and maintain the whole structure yourself (which is somewhat dangerous, or at least involves a lot of maintenance overhead). Next to that, it seemingly becomes hard to do simple queries likes 'select * from foo where bar> 1000 and bar < 5000', in case the answer to this query spans multiple partitions. constraint exclusion works to some degree, but the document I referred to above tells me I can no longer use prepared statements then.\nI wonder if there are any plans to incorporate 'native' or 'transparent' partitioning in some future version of PG? With this I mean that I would basically be able to say something like (pseudo): \"alter table foo partition on bar range 100\", and PG would then simply start doing internally what we now have to do manually.\nIs something like this on the radar or is it just wishful thinking of me?\nKind regards\n\n\n_________________________________________________________________\nWhat can you do with the new Windows Live? Find out\nhttp://www.microsoft.com/windows/windowslive/default.aspx\n\n\n\n\n\nHi,I was looking at the support that PostgreSQL offers for table partitioning at http://www.postgresql.org/docs/8.4/static/ddl-partitioning.html. The concept looks promising, but its maybe fair to say that PG itself doesn't really supports partitioning natively, but one can simulate it using some of the existing PG features (namely inheritance, triggers, rules and constraint exclusion). This simulating does seem to work, but there are some disadvantages and caveats. A major disadvantage is obviously that you need to set up and maintain the whole structure yourself (which is somewhat dangerous, or at least involves a lot of maintenance overhead). Next to that, it seemingly becomes hard to do simple queries likes 'select * from foo where bar> 1000 and bar < 5000', in case the answer to this query spans multiple partitions. constraint exclusion works to some degree, but the document I referred to above tells me I can no longer use prepared statements then.I wonder if there are any plans to incorporate 'native' or 'transparent' partitioning in some future version of PG? With this I mean that I would basically be able to say something like (pseudo): \"alter table foo partition on bar range 100\", and PG would then simply start doing internally what we now have to do manually.Is something like this on the radar or is it just wishful thinking of me?Kind regardsWhat can you do with the new Windows Live? Find out", "msg_date": "Fri, 1 May 2009 16:32:19 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Transparent table partitioning in future version of PG?" }, { "msg_contents": "On Fri, May 1, 2009 at 10:32 AM, henk de wit <[email protected]> wrote:\n> I was looking at the support that PostgreSQL offers for table partitioning\n> at http://www.postgresql.org/docs/8.4/static/ddl-partitioning.html. 
The\n> concept looks promising, but its maybe fair to say that PG itself doesn't\n> really supports partitioning natively, but one can simulate it using some of\n> the existing PG features (namely inheritance, triggers, rules and constraint\n> exclusion). This simulating does seem to work, but there are some\n> disadvantages and caveats.\n> A major disadvantage is obviously that you need to set up and maintain the\n> whole structure yourself (which is somewhat dangerous, or at least involves\n> a lot of maintenance overhead). Next to that, it seemingly becomes hard to\n> do simple queries likes 'select * from foo where bar> 1000 and bar < 5000',\n> in case the answer to this query spans multiple partitions. constraint\n> exclusion works to some degree, but the document I referred to above tells\n> me I can no longer use prepared statements then.\n> I wonder if there are any plans to incorporate 'native' or 'transparent'\n> partitioning in some future version of PG? With this I mean that I would\n> basically be able to say something like (pseudo): \"alter table foo partition\n> on bar range 100\", and PG would then simply start doing internally what we\n> now have to do manually.\n> Is something like this on the radar or is it just wishful thinking of me?\n> Kind regards\n\nThis has been discussed on this list multiple times previously; search\nthe archives.\n\nThe problem has been finding someone who has both the time and the\nability to do the work.\n\n...Robert\n", "msg_date": "Fri, 1 May 2009 11:27:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of PG?" }, { "msg_contents": "On 5/1/09 7:32 AM, \"henk de wit\" <[email protected]> wrote:\n\n> Hi,\n> \n> I was looking at the support that PostgreSQL offers for table partitioning\n> at http://www.postgresql.org/docs/8.4/static/ddl-partitioning.html. The\n> concept looks promising, but its maybe fair to say that PG itself doesn't\n> really supports partitioning natively, but one can simulate it using some of\n> the existing PG features (namely inheritance, triggers, rules and constraint\n> exclusion). This simulating does seem to work, but there are some\n> disadvantages and caveats. \n> \n> A major disadvantage is obviously that you need to set up and maintain the\n> whole structure yourself (which is somewhat dangerous, or at least involves a\n> lot of maintenance overhead). Next to that, it seemingly becomes hard to do\n> simple queries likes 'select * from foo where bar> 1000 and bar < 5000', in\n> case the answer to this query spans multiple partitions. constraint exclusion\n> works to some degree, but the document I referred to above tells me I can no\n> longer use prepared statements then.\n\nMore caveats: \n\nQuery plans go bad pretty quickly because the planner doesn't aggregate\nstatistics correctly when scanning more than one table.\n\nConstraint exclusion code is completely and utterly broken if the table\ncount gets large on DELETE or UPDATE queries -- I can get the query planner\n/ constraint exclusion stuff to eat up 7GB of RAM trying to figure out what\ntable to access when the number of partitions ~=6000.\nThe same thing in select form doesn't consume that memory but still takes\nover a second. 
\n\nThis is \"not a bug\".\nhttp://www.nabble.com/8.3.5:-Query-Planner-takes-15%2B-seconds-to-plan-Updat\ne-or-Delete-queries-on-partitioned-tables.-td21992054.html\n\nIts pretty much faster to do merge joins or hash joins client side on\nmultiple tables -- basically doing partitioning client side -- after a point\nand for any more complicated aggregation or join.\n\nThere is a lot of talk about overly complicated partitioning or\nauto-partitioning, but two much more simple things would go a long way to\nmaking this fairly workable:\n\nMake stat aggregation across tables better -- use weighted average for\nestimating row width, aggregate distinct counts and correlations better.\nRight now it mostly assumes the worst possible case and can end up with very\nunoptimal plans.\n\nMake a special case for \"unique\" child inheritance constraints that can be\nchecked much faster -- nobody wants to partition and have overlapping\nconstraint regions. And whatever is going on for it on the update / delete\nside that causes it to take so much longer and use so much more memory for\nwhat should be the same constraint exclusion check as a select needs to be\nattended to.\n\nThere would still be manual work for managing creating partitions, but at\nthis point, that is the _least_ of the problems.\n\n\n> \n> I wonder if there are any plans to incorporate 'native' or 'transparent'\n> partitioning in some future version of PG? With this I mean that I would\n> basically be able to say something like (pseudo): \"alter table foo partition\n> on bar range 100\", and PG would then simply start doing internally what we now\n> have to do manually.\n> \n> Is something like this on the radar or is it just wishful thinking of me?\n> \n> Kind regards\n> \n> \n> \n> \n> What can you do with the new Windows Live? Find out\n> <http://www.microsoft.com/windows/windowslive/default.aspx> \n\n", "msg_date": "Fri, 1 May 2009 10:14:57 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n PG?" }, { "msg_contents": "\nOn Fri, 2009-05-01 at 11:27 -0400, Robert Haas wrote:\n\n> The problem has been finding someone who has both the time and the\n> ability to do the work.\n\nUnfortunately there has been significant debate over which parts of\npartitioning need to be improved. My own view is that considerable\nattention needs to be applied to both the executor and planner to\nimprove matters and that syntax improvements are largely irrelevant,\nthough seductive.\n\nDeep improvements will require significant analysis, agreement, effort\nand skill. What we have now took approximately 20 days to implement,\nwith later patches adding about another 10-20 days work. I'd estimate\nthe required work as 60-100 days work from primary author, plus planning\nand discussion time. YMMV.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 06 May 2009 22:34:15 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n\tPG?" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Fri, 2009-05-01 at 11:27 -0400, Robert Haas wrote:\n>> The problem has been finding someone who has both the time and the\n>> ability to do the work.\n\n> Unfortunately there has been significant debate over which parts of\n> partitioning need to be improved. 
My own view is that considerable\n> attention needs to be applied to both the executor and planner to\n> improve matters and that syntax improvements are largely irrelevant,\n> though seductive.\n\nMy thought about it is that what we really need is an explicit notion\nof partitioned tables built into the system, instead of trying to make\nthe planner re-deduce the partitioning behavior from first principles\nevery time it builds a plan for such a table. Such a notion would\npresumably involve some new syntax to allow the partitioning rule to be\nspecified at table creation time. I agree that the syntax details are a\nminor issue, but the set of possible partitioning rules is certainly a\ntopic of great interest.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 May 2009 17:55:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of PG? " }, { "msg_contents": "\nOn Wed, 2009-05-06 at 17:55 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Fri, 2009-05-01 at 11:27 -0400, Robert Haas wrote:\n> >> The problem has been finding someone who has both the time and the\n> >> ability to do the work.\n> \n> > Unfortunately there has been significant debate over which parts of\n> > partitioning need to be improved. My own view is that considerable\n> > attention needs to be applied to both the executor and planner to\n> > improve matters and that syntax improvements are largely irrelevant,\n> > though seductive.\n> \n> My thought about it is that what we really need is an explicit notion\n> of partitioned tables built into the system, instead of trying to make\n> the planner re-deduce the partitioning behavior from first principles\n> every time it builds a plan for such a table. Such a notion would\n> presumably involve some new syntax to allow the partitioning rule to be\n> specified at table creation time. I agree that the syntax details are a\n> minor issue, but the set of possible partitioning rules is certainly a\n> topic of great interest.\n\nAgreed. Perhaps I should say then that the syntax needs to express the\nrequirements of the planner/executor behaviour, rather than being the\nmain aspect of the feature, as some have suggested.\n\nHopefully, notions of partitioning won't be directly tied to chunking of\ndata for parallel query access. Most queries access recent data and\nhence only a single partition (or stripe), so partitioning and\nparallelism and frequently exactly orthogonal. \n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 06 May 2009 23:08:00 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n\tPG?" }, { "msg_contents": "Simon Riggs escribi�:\n\n\n> Hopefully, notions of partitioning won't be directly tied to chunking of\n> data for parallel query access. Most queries access recent data and\n> hence only a single partition (or stripe), so partitioning and\n> parallelism and frequently exactly orthogonal. \n\nI think there should be a way to refer to individual partitions as\nobjects. 
That way we could execute some commands to enable certain\noptimizations, for example \"mark this partition read only\" which would\nmean it could be marked as not needing vacuum.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 6 May 2009 22:16:14 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n\tPG?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I think there should be a way to refer to individual partitions as\n> objects.\n\nYeah, the individual partitions should be nameable tables, otherwise we\nwill be reinventing a *whole* lot of management stuff to little gain.\nI don't actually think there is anything wrong with using table\ninheritance as the basic infrastructure --- I just want more smarts\nabout one particular use pattern of inheritance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 May 2009 22:27:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of PG? " }, { "msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> I think there should be a way to refer to individual partitions as\n>> objects.\n> \n> Yeah, the individual partitions should be nameable tables, otherwise we\n> will be reinventing a *whole* lot of management stuff to little gain.\n> I don't actually think there is anything wrong with using table\n> inheritance as the basic infrastructure --- I just want more smarts\n> about one particular use pattern of inheritance.\n\nMaybe it's worth examining and documenting existing partition setups,\nthe reasoning behind them, and how they're implemented, in order to\nguide any future plans for native partitioning support?\n\nMaybe that's already been/being done. On the off chance that it's not:\n\nOnes I can think of:\n\n- Partitioning an equally active dataset by ranges over a key to improve\n scan performance, INSERT/UPDATE costs on indexes, locking issues, etc.\n\n- The \"classic\" active/archive partition scheme where there's only one\npartition growing at any one time, and the others are historical data\nthat's nowhere near as \"hot\".\n\n- A variant on the basic active/archive structure, where query activity\ndecreases slowly over time and there are many partitions of recent data.\nPartitions are merged into larger ones as they age, somewhat like a RRD\ndatabase.\n\nI also expect that in the future there will be demand for striping data\nacross multiple partitions in different tablespaces to exploit\nin-parallel scanning (when/if supported) for better I/O utilization in\nmultiple-disk-array situations. For example, partitioning on\n\"MOD(id,10)\" across 10 separate volumes, and firing off 10 concurrent\nscans, one per partition, to satisfy a query.\n\nThose are some simpler schemes. Does anyone actively using partitioning\nhave particular schemes/details that're worth going into?\n\n--\nCraig Ringer\n", "msg_date": "Thu, 07 May 2009 10:56:24 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n PG?" 
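As a concrete reference for the setups Craig lists, here is a minimal sketch of how the "classic" active/archive range scheme is usually hand-built on 8.3/8.4 with inheritance; the table and partition names are invented for illustration, and the trigger is only one of several routing options (rules or application-side inserts also work).

-- A sketch (names invented) of the hand-built active/archive range scheme
-- as typically done on 8.3/8.4 with inheritance and CHECK constraints.
CREATE TABLE measurement (
    id       bigint NOT NULL,
    logdate  date   NOT NULL,
    payload  text
);

-- One child per month; the CHECK constraint is what constraint_exclusion
-- uses to skip partitions that cannot match a query.
CREATE TABLE measurement_2009_04 (
    CHECK ( logdate >= DATE '2009-04-01' AND logdate < DATE '2009-05-01' )
) INHERITS (measurement);

CREATE TABLE measurement_2009_05 (
    CHECK ( logdate >= DATE '2009-05-01' AND logdate < DATE '2009-06-01' )
) INHERITS (measurement);

CREATE INDEX measurement_2009_04_logdate ON measurement_2009_04 (logdate);
CREATE INDEX measurement_2009_05_logdate ON measurement_2009_05 (logdate);

-- Route inserts on the parent into the right child (a rule or
-- application-side routing would also work).
CREATE OR REPLACE FUNCTION measurement_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2009-05-01' AND NEW.logdate < DATE '2009-06-01' THEN
        INSERT INTO measurement_2009_05 VALUES (NEW.*);
    ELSE
        INSERT INTO measurement_2009_04 VALUES (NEW.*);
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurement_insert
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();

-- With exclusion enabled, a query on recent data only touches the
-- current partition.
SET constraint_exclusion = on;
EXPLAIN SELECT * FROM measurement WHERE logdate >= DATE '2009-05-15';

Every new period means another CREATE TABLE, another CHECK constraint and an edit to the routing function -- which is exactly the manual management the thread would like to see automated.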
}, { "msg_contents": "\nOn Thu, 2009-05-07 at 10:56 +0800, Craig Ringer wrote:\n> Tom Lane wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> >> I think there should be a way to refer to individual partitions as\n> >> objects.\n> > \n> > Yeah, the individual partitions should be nameable tables, otherwise we\n> > will be reinventing a *whole* lot of management stuff to little gain.\n> > I don't actually think there is anything wrong with using table\n> > inheritance as the basic infrastructure --- I just want more smarts\n> > about one particular use pattern of inheritance.\n> \n> Maybe it's worth examining and documenting existing partition setups,\n> the reasoning behind them, and how they're implemented, in order to\n> guide any future plans for native partitioning support?\n> \n> Maybe that's already been/being done. On the off chance that it's not:\n> \n> Ones I can think of:\n> \n> - Partitioning an equally active dataset by ranges over a key to improve\n> scan performance, INSERT/UPDATE costs on indexes, locking issues, etc.\n> \n> - The \"classic\" active/archive partition scheme where there's only one\n> partition growing at any one time, and the others are historical data\n> that's nowhere near as \"hot\".\n> \n> - A variant on the basic active/archive structure, where query activity\n> decreases slowly over time and there are many partitions of recent data.\n> Partitions are merged into larger ones as they age, somewhat like a RRD\n> database.\n> \n> I also expect that in the future there will be demand for striping data\n> across multiple partitions in different tablespaces to exploit\n> in-parallel scanning (when/if supported) for better I/O utilization in\n> multiple-disk-array situations. For example, partitioning on\n> \"MOD(id,10)\" across 10 separate volumes, and firing off 10 concurrent\n> scans, one per partition, to satisfy a query.\n\nThat's a good summary. It has already been documented and discussed, but\nsaying it again and again is the best way to get this across.\n\nYou've highlighted that partitioning is a feature with many underlying\nrequirements: infrequent access to data (frequently historical),\nstriping for parallelism and getting around RDBMS flaws (if any). We\nmust be careful to implement each requirement in full, yet separately,\nso we don't end up with 60% functionality in each case by delivering an\naverage or least common denominator solution.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Thu, 07 May 2009 09:54:18 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n\tPG?" }, { "msg_contents": "\n\n\nOn 5/7/09 1:54 AM, \"Simon Riggs\" <[email protected]> wrote:\n\n> \n> \n> On Thu, 2009-05-07 at 10:56 +0800, Craig Ringer wrote:\n>> Tom Lane wrote:\n>> \n>> I also expect that in the future there will be demand for striping data\n>> across multiple partitions in different tablespaces to exploit\n>> in-parallel scanning (when/if supported) for better I/O utilization in\n>> multiple-disk-array situations. For example, partitioning on\n>> \"MOD(id,10)\" across 10 separate volumes, and firing off 10 concurrent\n>> scans, one per partition, to satisfy a query.\n> \n> That's a good summary. 
It has already been documented and discussed, but\n> saying it again and again is the best way to get this across.\n> \n> You've highlighted that partitioning is a feature with many underlying\n> requirements: infrequent access to data (frequently historical),\n\nActually, infrequent access is not a requirement. It is a common\nrequirement however.\n\nTake for instance, a very large set of data that contains an integer column\n'type_id' that has about 200 distinct values. The data is accessed with a\nstrict 'type_id = X' requirement 99.9% of the time. If this was one large\ntable, then scans of all sorts become much more expensive than if it is\npartitioned on 'type_id'. Furthermore, partitioning on type_id removes the\nrequirement to even index on this value. Statistics on each partition may\nvary significantly, and the plannner can thus adapt to changes in the data\nper value of type_id naturally.\n\nThe raw need is not \"infrequent access\" but highly partitioned access. It\ndoesn't matter if your date-partitioned data is accessed evenly across all\ndates or skewed to the most frequent -- it matters that you are almost\nalways accessing by small date ranges.\n\n> striping for parallelism and getting around RDBMS flaws (if any). We\n> must be careful to implement each requirement in full, yet separately,\n> so we don't end up with 60% functionality in each case by delivering an\n> average or least common denominator solution.\n> \n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 7 May 2009 10:36:58 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n PG?" }, { "msg_contents": "On Wed, May 6, 2009 at 6:08 PM, Simon Riggs <[email protected]> wrote:\n> Agreed. Perhaps I should say then that the syntax needs to express the\n> requirements of the planner/executor behaviour, rather than being the\n> main aspect of the feature, as some have suggested.\n\nAgreed.\n\n> Hopefully, notions of partitioning won't be directly tied to chunking of\n> data for parallel query access. Most queries access recent data and\n> hence only a single partition (or stripe), so partitioning and\n> parallelism and frequently exactly orthogonal.\n\nYes, I think those things are unrelated.\n\n...Robert\n", "msg_date": "Thu, 7 May 2009 22:20:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of PG?" }, { "msg_contents": "On Thu, 7 May 2009, Robert Haas wrote:\n\n> On Wed, May 6, 2009 at 6:08 PM, Simon Riggs <[email protected]> wrote:\n>> Agreed. Perhaps I should say then that the syntax needs to express the\n>> requirements of the planner/executor behaviour, rather than being the\n>> main aspect of the feature, as some have suggested.\n>\n> Agreed.\n>\n>> Hopefully, notions of partitioning won't be directly tied to chunking of\n>> data for parallel query access. 
Most queries access recent data and\n>> hence only a single partition (or stripe), so partitioning and\n>> parallelism and frequently exactly orthogonal.\n>\n> Yes, I think those things are unrelated.\n\nI'm not so sure (warning, I am relativly inexperianced in this area)\n\nit sounds like you can take two basic approaches to partition a database\n\n1. The Isolation Plan\n\n you want to have it so that your queries match your partitioning.\n\n this is with the goal of only having to query a small number of \nparitions, minimizing the total amount of data touched (including \nminimumizing the number of indexes searched)\n\n this matches the use case mentioned above, with the partition based on \ndate and only looking at the most recent date range.\n\n2. The Load Balancing Plan\n\n you want to have your partitioning and your queries _not_ match as much \nas possible\n\n this is with the goal of having the query hit as many partitions as \npossible, so that the different parts of the search can happen in parallel\n\n\nHowever, with either partitioning plan, you will have queries that \ndegenerate to look like the other plan.\n\nIn the case of the isolation plan, you may need to search for all \ninstances of a rare thing over the entire history (after all, if you never \nneed to access that history, why do you pay for disks to store it? ;-)\n\nand even when you are searching a narrow time window, it may still span \nmultiple partitions. I have a log analysis setup using the Splunk \nprioriatary database, it paritions by time, creating a new parition as the \ncurrent one hits a configurable size (by default 10G on 64 bit systems). \nfor my volume of logs I end up with each parition only covering a few \nhours. it's very common to want to search over a few days, which can be a \nfew dozen partitions (this is out of many hundreds of partitions, so it's \nstill a _huge_ win to narrow the timeframe)\n\n\nIn the case of the load balancing plan, you may run into a query that \nhappens to only fall into one partition (the query matches your \nparitioning logic)\n\n\n\n\nI think the only real difference is how common it is to need to search \nmultiple partitions.\n\nIf the expectation is that you will frequently need to search most/all of \nthe partitions (the load balancing plan), then it's a waste of time to \nanalyse the query to try and figure out which paritions you need to look \nat.\n\nIf the expectation is that you will frequently only need to search a small \nnumber of the partitions (the isolation plan), then it's extremely valuble \nto spend as much time as needed working to analyse the query to try and \nfigure out which partitions you need to look at.\n\n\nI believe that the isolation plan is probably more common than the load \nbalancing plan, but I don't see them as being that different for the \ndatabase engine point of view. To tune a system that can handle the \nisolation plan for load balancing, the key thing to do would be to have a \nknob to disable the partition planning, and just blindly send the search \nout to every partition.\n\nDavid Lang\n", "msg_date": "Thu, 7 May 2009 19:52:13 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n PG?" }, { "msg_contents": "On Thu, May 7, 2009 at 10:52 PM, <[email protected]> wrote:\n>>> Hopefully, notions of partitioning won't be directly tied to chunking of\n>>> data for parallel query access. 
Most queries access recent data and\n>>> hence only a single partition (or stripe), so partitioning and\n>>> parallelism and frequently exactly orthogonal.\n>>\n>> Yes, I think those things are unrelated.\n>\n> I'm not so sure (warning, I am relativly inexperianced in this area)\n>\n> it sounds like you can take two basic approaches to partition a database\n>\n> 1. The Isolation Plan\n[...]\n> 2. The Load Balancing Plan\n\nWell, even if the table is not partitioned at all, I don't see that it\nshould preclude parallel query access. If I've got a 1 GB table that\nneeds to be sequentially scanned for rows meeting some restriction\nclause, and I have two CPUs and plenty of I/O bandwidth, ISTM it\nshould be possible to have them each scan half of the table and\ncombine the results. Now, this is not easy and there are probably\nsubstantial planner and executor changes required to make it work, but\nI don't know that it would be particularly easier if I had two 500 MB\npartitions instead of a single 1 GB table.\n\nIOW, I don't think you should need to partition if all you want is\nload balancing. Partitioning should be for isolation, and load\nbalancing should happen when appropriate, whether there is\npartitioning involved or not.\n\n...Robert\n", "msg_date": "Fri, 8 May 2009 12:47:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of PG?" }, { "msg_contents": "\nOn 5/7/09 7:52 PM, \"[email protected]\" <[email protected]> wrote:\n\n> \n> \n> I believe that the isolation plan is probably more common than the load\n> balancing plan, but I don't see them as being that different for the\n> database engine point of view. To tune a system that can handle the\n> isolation plan for load balancing, the key thing to do would be to have a\n> knob to disable the partition planning, and just blindly send the search\n> out to every partition.\n\nLots of good points. However, implicit in the above is that the process of\nidentifying which partitions contain the data is expensive.\nRight now it is (1.5 sec if 6000 partitions with the most simple possible\nconstraint (column = CONSTANT).\n\nBut identifying which partitions can contain a value is really nothing more\nthan an index. If you constrain the possible partitioning functions to\nthose where a single partition key can only exist in one partition, then\nthis index and its look up should be very fast even for large partition\ncounts. From what I can tell empirically, the current system does this in\nmore of a sequential scan, running the constraint checks for each\npossibility. \nFurthremore, the actual tables don't have to contain the data if the key is\na column identity function (date = X ) rather than a range or hash.\n\nAt the core, partitioning is really just a form of 'chunky' indexing that\ndoesn't fragment, or need re-indexing, or have much MVCC complexity.\n\n> \n> David Lang\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Fri, 8 May 2009 10:09:15 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n PG?" 
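A rough way to reproduce the planning cost Scott describes, for anyone who wants to measure it on their own hardware: build a parent with many equality-constrained children and time the planner alone. The object names are invented, and 1000 children is just a convenient size (his 1.5 s figure was for 6000 partitions).

-- Invented schema: one child per type_id value, equality CHECK only.
CREATE TABLE events (type_id int NOT NULL, payload text);

CREATE OR REPLACE FUNCTION make_event_children(n int) RETURNS void AS $$
DECLARE
    i int;
BEGIN
    FOR i IN 1..n LOOP
        EXECUTE 'CREATE TABLE events_' || i::text
             || ' (CHECK (type_id = ' || i::text || ')) INHERITS (events)';
    END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT make_event_children(1000);

SET constraint_exclusion = on;

-- EXPLAIN without ANALYZE times only planning, which includes checking
-- every child's constraint; run it under psql's \timing and compare the
-- figures at, say, 100 vs. 1000 children.
EXPLAIN SELECT * FROM events WHERE type_id = 42;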
}, { "msg_contents": "On Fri, 8 May 2009, Robert Haas wrote:\n\n> On Thu, May 7, 2009 at 10:52 PM, <[email protected]> wrote:\n>>>> Hopefully, notions of partitioning won't be directly tied to chunking of\n>>>> data for parallel query access. Most queries access recent data and\n>>>> hence only a single partition (or stripe), so partitioning and\n>>>> parallelism and frequently exactly orthogonal.\n>>>\n>>> Yes, I think those things are unrelated.\n>>\n>> I'm not so sure (warning, I am relativly inexperianced in this area)\n>>\n>> it sounds like you can take two basic approaches to partition a database\n>>\n>> 1. The Isolation Plan\n> [...]\n>> 2. The Load Balancing Plan\n>\n> Well, even if the table is not partitioned at all, I don't see that it\n> should preclude parallel query access. If I've got a 1 GB table that\n> needs to be sequentially scanned for rows meeting some restriction\n> clause, and I have two CPUs and plenty of I/O bandwidth, ISTM it\n> should be possible to have them each scan half of the table and\n> combine the results. Now, this is not easy and there are probably\n> substantial planner and executor changes required to make it work, but\n> I don't know that it would be particularly easier if I had two 500 MB\n> partitions instead of a single 1 GB table.\n>\n> IOW, I don't think you should need to partition if all you want is\n> load balancing. Partitioning should be for isolation, and load\n> balancing should happen when appropriate, whether there is\n> partitioning involved or not.\n\nactually, I will contridict myself slightly.\n\nwith the Isolation Plan there is not nessasarily a need to run the query \non each parition in parallel.\n\n if parallel queries are possible, it will benifit Isolation Plan \nparitioning, but the biggest win with this plan is just reducing the \nnumber of paritions that need to be queried.\n\nwith the Load Balancing Plan there is no benifit in partitioning unless \nyou have the ability to run queries on each parition in parallel\n\n\nusing a seperate back-end process to do a query on a seperate partition is \na fairly straightforward, but not trivial thing to do (there are \ncomplications in merging the result sets, including the need to be able to \ndo part of a query, merge the results, then use those results for the next \nstep in the query)\n\n I would also note that there does not seem to be a huge conceptual \ndifference between doing these parallel queries on one computer and \nshipping the queries off to other computers.\n\n\nhowever, trying to split the work on a single table runs into all sorts of \n'interesting' issues with things needing to be shared between the multiple \nprocesses (they both need to use the same indexes, for example)\n\nso I think that it is much easier for the database engine to efficiantly \nsearch two 500G tables instead of one 1T table.\n\nDavid Lang\n", "msg_date": "Fri, 8 May 2009 11:20:57 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n PG?" }, { "msg_contents": "\nOn 5/8/09 11:20 AM, \"[email protected]\" <[email protected]> wrote:\n> \n> with the Load Balancing Plan there is no benifit in partitioning unless\n> you have the ability to run queries on each parition in parallel\n> \n\nI think there is a benefit to partitioning in this case. If the statistics\non other columns are highly skewed WRT the column(s) partitioned, the\nplanner statistics will be better. 
It may have to access every partition,\nbut it doesn't have to access every partition in the same way.\n\nPerhaps something like: user_id = 'FOO' is one of the most common vals in\ndate partition A, and one of the least common vals in B, so a where clause\nwith user_id = 'FOO' will sequential scan one and index scan another.\n\nFor really large tables with data correlation that varies significantly,\nthis can be a huge performance gain even if all partitions are accessed.\n\n", "msg_date": "Fri, 8 May 2009 12:40:07 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n PG?" }, { "msg_contents": ">> IOW, I don't think you should need to partition if all you want is\n>> load balancing.  Partitioning should be for isolation, and load\n>> balancing should happen when appropriate, whether there is\n>> partitioning involved or not.\n>\n> actually, I will contridict myself slightly.\n>\n[...]\n> however, trying to split the work on a single table runs into all sorts of\n> 'interesting' issues with things needing to be shared between the multiple\n> processes (they both need to use the same indexes, for example)\n\nI disagree with this part of your email. It is already the case that\ntables and indexes need to support concurrent access by multiple\nPostgres processes. I don't see why that part of the problem would be\nany more difficult for parallel query execution than it would be for\nexecuting two different and unrelated queries on the same table.\n\n> so I think that it is much easier for the database engine to efficiantly\n> search two 500G tables instead of one 1T table.\n\nAnd that leads me to the opposite conclusion on this point.\n\n...Robert\n", "msg_date": "Fri, 8 May 2009 16:53:16 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of PG?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n>> so I think that it is much easier for the database engine to efficiantly\n>> search two 500G tables instead of one 1T table.\n\n> And that leads me to the opposite conclusion on this point.\n\nI don't think there would be any difference on that score, either.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 May 2009 17:03:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of PG? " }, { "msg_contents": "Robert Haas wrote:\n\n> Well, even if the table is not partitioned at all, I don't see that it\n> should preclude parallel query access. If I've got a 1 GB table that\n> needs to be sequentially scanned for rows meeting some restriction\n> clause, and I have two CPUs and plenty of I/O bandwidth, ISTM it\n> should be possible to have them each scan half of the table and\n> combine the results. Now, this is not easy and there are probably\n> substantial planner and executor changes required to make it work, but\n> I don't know that it would be particularly easier if I had two 500 MB\n> partitions instead of a single 1 GB table.\n\nThe point of partitioning in this scenario is primarily that you can put\nthe different partitions in different tablespaces, most likely on\nindependent disk devices. 
You therefore get more I/O bandwidth.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 09 May 2009 09:25:38 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transparent table partitioning in future version of\n PG?" } ]
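To make the tablespace point concrete, a small sketch of the MOD(id,10)-style striping mentioned above, with each stripe placed on its own volume. Paths and names are invented; the directories must already exist and CREATE TABLESPACE needs superuser rights.

-- Two of the ten MOD(id,10) stripes, each on its own volume.
CREATE TABLESPACE vol0 LOCATION '/mnt/disk0/pgdata';
CREATE TABLESPACE vol1 LOCATION '/mnt/disk1/pgdata';

CREATE TABLE big (id bigint NOT NULL, payload text);

CREATE TABLE big_p0 (CHECK (mod(id, 10) = 0)) INHERITS (big) TABLESPACE vol0;
CREATE TABLE big_p1 (CHECK (mod(id, 10) = 1)) INHERITS (big) TABLESPACE vol1;

A single backend still scans the stripes one after another, so the extra bandwidth mainly pays off with concurrent queries or with the in-parallel scanning discussed earlier in the thread.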
[ { "msg_contents": "Friendly greetings !\nI found something \"odd\" (something that i can't explain) this weekend.\n\nAn octocore server with 32GB of ram, running postgresql 8.3.6\nRunning only postgresql, slony-I and pgbouncer.\n\nJust for testing purpose, i tried a setting with 26GB of shared_buffer.\n\nI quickly noticed that the performances wasn't very good and the\nserver started to swap slowly but surely.\n (but still up to 2000query/second as reported by pgfouine)\n\nIt used all the 2GB of swap.\nI removed the server from production, added 10GB of swap and left it\nfor the weekend with only slony and postgresql up to keep it in sync\nwith the master database.\n\nThis morning i found that the whole 12GB of swap were used :\nMem: 32892008k total, 32714728k used, 177280k free, 70872k buffers\nSwap: 12582896k total, 12531812k used, 51084k free, 27047696k cached\n\n# cat /proc/meminfo\nMemTotal: 32892008 kB\nMemFree: 171140 kB\nBuffers: 70852 kB\nCached: 27065208 kB\nSwapCached: 4752492 kB\nActive: 24362168 kB\nInactive: 7806884 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 32892008 kB\nLowFree: 171140 kB\nSwapTotal: 12582896 kB\nSwapFree: 53064 kB\nDirty: 122636 kB\nWriteback: 0 kB\nAnonPages: 280336 kB\nMapped: 14118588 kB\nSlab: 224632 kB\nPageTables: 235120 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nCommitLimit: 29028900 kB\nCommitted_AS: 28730620 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 12916 kB\nVmallocChunk: 34359725307 kB\n\nWhile i understand that a very high shared_buffer wasn't a good idea,\ni don't understand this behaviour.\nAny tought ?\n\nI tried this setup because having 2 level of data caching doesn't make\nsense to me. (1 in OS filesystem cache and 1 in shm (shared_buffer)).\n\nI'd love to understand what's happening here ! Thank you :)\n\n-- \nF4FQM\nKerunix Flan\nLaurent Laborde\n", "msg_date": "Mon, 4 May 2009 10:10:08 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "high shared buffer and swap" }, { "msg_contents": "Sorry for top-posting - the iphone mail client sucks.\n\nI think what's happening is that the sytem is seeing that some pages \nof shared memory haven't been used recently and because there's more \nshared memory than filesystem cache less recently than the filesystem \ncache pages. So it pages out the shared memory. 
This is really awful \nbecause we use a kind of lru algorithm for shared memory so the pages \nthat it's paging out are precisely the pges likely to be used soon.\n\nI wonder if we should try to mlock shared buffers.\n\n-- \nGreg\n\n\nOn 4 May 2009, at 10:10, Laurent Laborde <[email protected]> wrote:\n\n> Friendly greetings !\n> I found something \"odd\" (something that i can't explain) this weekend.\n>\n> An octocore server with 32GB of ram, running postgresql 8.3.6\n> Running only postgresql, slony-I and pgbouncer.\n>\n> Just for testing purpose, i tried a setting with 26GB of \n> shared_buffer.\n>\n> I quickly noticed that the performances wasn't very good and the\n> server started to swap slowly but surely.\n> (but still up to 2000query/second as reported by pgfouine)\n>\n> It used all the 2GB of swap.\n> I removed the server from production, added 10GB of swap and left it\n> for the weekend with only slony and postgresql up to keep it in sync\n> with the master database.\n>\n> This morning i found that the whole 12GB of swap were used :\n> Mem: 32892008k total, 32714728k used, 177280k free, 70872k \n> buffers\n> Swap: 12582896k total, 12531812k used, 51084k free, 27047696k \n> cached\n>\n> # cat /proc/meminfo\n> MemTotal: 32892008 kB\n> MemFree: 171140 kB\n> Buffers: 70852 kB\n> Cached: 27065208 kB\n> SwapCached: 4752492 kB\n> Active: 24362168 kB\n> Inactive: 7806884 kB\n> HighTotal: 0 kB\n> HighFree: 0 kB\n> LowTotal: 32892008 kB\n> LowFree: 171140 kB\n> SwapTotal: 12582896 kB\n> SwapFree: 53064 kB\n> Dirty: 122636 kB\n> Writeback: 0 kB\n> AnonPages: 280336 kB\n> Mapped: 14118588 kB\n> Slab: 224632 kB\n> PageTables: 235120 kB\n> NFS_Unstable: 0 kB\n> Bounce: 0 kB\n> CommitLimit: 29028900 kB\n> Committed_AS: 28730620 kB\n> VmallocTotal: 34359738367 kB\n> VmallocUsed: 12916 kB\n> VmallocChunk: 34359725307 kB\n>\n> While i understand that a very high shared_buffer wasn't a good idea,\n> i don't understand this behaviour.\n> Any tought ?\n>\n> I tried this setup because having 2 level of data caching doesn't make\n> sense to me. (1 in OS filesystem cache and 1 in shm (shared_buffer)).\n>\n> I'd love to understand what's happening here ! Thank you :)\n>\n> -- \n> F4FQM\n> Kerunix Flan\n> Laurent Laborde\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 May 2009 10:57:47 +0200", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high shared buffer and swap" }, { "msg_contents": "On Mon, May 04, 2009 at 10:57:47AM +0200, Greg Stark wrote:\n> I think what's happening is that the sytem is seeing that some pages of \n> shared memory haven't been used recently and because there's more shared \n> memory than filesystem cache less recently than the filesystem cache \n> pages. So it pages out the shared memory. This is really awful because we \n> use a kind of lru algorithm for shared memory so the pages that it's \n> paging out are precisely the pges likely to be used soon.\n>\n> I wonder if we should try to mlock shared buffers.\n\nYou can try, but it probably won't work. You often need to be root to\nlock pages, and even on Linux 2.6.9+ where you don't need to be root\nthere's a limit of 32KB (that's only my machine anyway). Sure, that can\nbe changed, if you're root.\n\nActually locking the shared buffers seems to me like a footgun. 
People\noccasionally give postgresql masses of memory leaving not enough to run\nthe rest of the system. Locking the memory would make the situation\nworse.\n\nPersonally I've never seen a benefit of setting shared buffer above the\nexpected working set size. I generally let the kernel share the\nremaining memory between postgresql disk cache and other processes I\nmight be running. On a NUMA machine you want to be keeping your memory\non the local node and letting the kernel copy that data from elsewhere\nto your local memory when you need it.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Please line up in a tree and maintain the heap invariant while \n> boarding. Thank you for flying nlogn airlines.", "msg_date": "Mon, 4 May 2009 12:15:09 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] high shared buffer and swap" }, { "msg_contents": "On Mon, May 4, 2009 at 2:10 AM, Laurent Laborde <[email protected]> wrote:\n> Friendly greetings !\n> I found something \"odd\" (something that i can't explain) this weekend.\n>\n> An octocore server with 32GB of ram, running postgresql 8.3.6\n> Running only postgresql, slony-I and pgbouncer.\n>\n> Just for testing purpose, i tried a setting with 26GB of shared_buffer.\n>\n> I quickly noticed that the performances wasn't very good and the\n> server started to swap slowly but surely.\n>  (but still up to 2000query/second as reported by pgfouine)\n>\n> It used all the 2GB of swap.\n> I removed the server from production, added 10GB of swap and left it\n> for the weekend with only slony and postgresql up to keep it in sync\n> with the master database.\n>\n> This morning i found that the whole 12GB of swap were used :\n> Mem:  32892008k total, 32714728k used,   177280k free,    70872k buffers\n> Swap: 12582896k total, 12531812k used,    51084k free, 27047696k cached\n\nTry setting swappiness =0.\n\nBut as someone else mentioned, I've alwas had better luck letting the\nOS do most of the caching anyway.\n", "msg_date": "Mon, 4 May 2009 09:07:45 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high shared buffer and swap" }, { "msg_contents": "\n> An octocore server with 32GB of ram, running postgresql 8.3.6\n> Running only postgresql, slony-I and pgbouncer.\n>\n> Just for testing purpose, i tried a setting with 26GB of shared_buffer.\n>\n> I quickly noticed that the performances wasn't very good and the\n> server started to swap slowly but surely.\n> (but still up to 2000query/second as reported by pgfouine)\n>\n> It used all the 2GB of swap.\n> I removed the server from production, added 10GB of swap and left it\n> for the weekend with only slony and postgresql up to keep it in sync\n> with the master database.\n>\n> This morning i found that the whole 12GB of swap were used :\n\n\tHm, do you really need swap with 32Gb of RAM ?\n\n\tOne could argue \"yes but swap is useful to avoid out of memory errors\".\n\tBut if a loaded server starts to swap a lot, it is as good as dead \nanyway...\n", "msg_date": "Tue, 05 May 2009 11:15:51 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] high shared buffer and swap" }, { "msg_contents": "On Tue, May 5, 2009 at 11:15 AM, PFC <[email protected]> wrote:\n>\n>> An octocore server with 32GB of ram, running postgresql 8.3.6\n>> Running only postgresql, slony-I and pgbouncer.\n>>\n>> Just for testing purpose, i 
tried a setting with 26GB of shared_buffer.\n>>\n>> I quickly noticed that the performances wasn't very good and the\n>> server started to swap slowly but surely.\n>>  (but still up to 2000query/second as reported by pgfouine)\n>>\n>> It used all the 2GB of swap.\n>> I removed the server from production, added 10GB of swap and left it\n>> for the weekend with only slony and postgresql up to keep it in sync\n>> with the master database.\n>>\n>> This morning i found that the whole 12GB of swap were used :\n>\n> Hm, do you really need swap with 32Gb of RAM ?\n> One could argue \"yes but swap is useful to avoid out of memory\n> errors\".\n> But if a loaded server starts to swap a lot, it is as good as dead\n> anyway...\n\nNot really, but we have it.\nI tried with swappinness set to 0 and ... it swaps !\n\nI'm back to 4GB of shared_buffer :)\nI'll try various setting, maybe 16GB, etc ...\nBut my goal was to avoid OS filesystem cache and usage of\nshared_buffer instead : FAIL.\n\n-- \nF4FQM\nKerunix Flan\nLaurent Laborde\n", "msg_date": "Tue, 5 May 2009 12:20:01 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] high shared buffer and swap" }, { "msg_contents": "On Tue, 5 May 2009, Laurent Laborde wrote:\n> I tried with swappinness set to 0 and ... it swaps !\n\nWhile I wouldn't presume to try and teach you to suck eggs, once you set \nswappiness to zero the system will take a while to settle down. The \nswappiness setting will stop it swapping out, but if there is stuff out in \nswap, then it won't force it to swap it in early. So the machine will \nstill thrash when it tries to access something that is out on swap. Easy \nway to solve this is to swapoff the various swap partitions, which will \nforce it all into memory. You can swapon them again afterwards.\n\nMatthew\n\n-- \nSurely the value of C++ is zero, but C's value is now 1?\n -- map36, commenting on the \"No, C++ isn't equal to D. 'C' is undeclared\n [...] C++ should really be called 1\" response to \"C++ -- shouldn't it\n be called D?\"\n", "msg_date": "Tue, 5 May 2009 12:24:42 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] high shared buffer and swap" } ]
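A practical aid for the "size shared_buffers to the expected working set" advice above is to look at what is actually resident in the buffer cache. This sketch assumes the contrib pg_buffercache module is installed in the database and the default 8 kB block size:

-- Top consumers of shared_buffers right now.
SELECT c.relname,
       count(*)                        AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

If the hot relations add up to only a few GB, a moderate setting such as the 4GB Laurent fell back to is probably closer to the mark than 26GB.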
[ { "msg_contents": "Is there a way to limit I/O bandwidth/CPU usage of a certain backend? It\nseems that ionice/renice makes no sense because of bgwriter/WAL writer\nprocesses are not a part of a backend. I have a periodic query (every\nhour) that make huge I/O load and should run in background. When this\nquery runs all other fast queries slow down dramatically.\n\n", "msg_date": "Tue, 05 May 2009 16:31:24 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Limit I/O bandwidth of a certain backend" }, { "msg_contents": "On Tue, May 5, 2009 at 2:31 AM, Vlad Arkhipov <[email protected]> wrote:\n> Is there a way to limit I/O bandwidth/CPU usage of a certain backend? It\n> seems that ionice/renice makes no sense because of bgwriter/WAL writer\n> processes are not a part of a backend. I have a periodic query (every\n> hour) that make huge I/O load and should run in background. When this\n> query runs all other fast queries slow down dramatically.\n\nCould you use something like slony to replicate the needed data to a\nsecondary database and run the query there?\n\nBryan\n", "msg_date": "Tue, 5 May 2009 19:54:00 -0500", "msg_from": "Bryan Murphy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit I/O bandwidth of a certain backend" }, { "msg_contents": "On Tue, 5 May 2009, Vlad Arkhipov wrote:\n\n> Is there a way to limit I/O bandwidth/CPU usage of a certain backend? It\n> seems that ionice/renice makes no sense because of bgwriter/WAL writer\n> processes are not a part of a backend.\n\nThe background writer and WAL writer are pretty low users of CPU and I/O \nrelative to how much an expensive query uses in most cases. You should \ntry out one of the nice approaches and see if it works for you rather than \npresuming the background processes will thwart you here. There isn't \nanything better available in the database yet, so it's the closest things \nto a good solution available right now anyway.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 6 May 2009 01:10:53 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit I/O bandwidth of a certain backend" } ]
[ { "msg_contents": "Hi,\n\nany idea if there is a more optimal execution plan possible for this query:\n\nselect S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,\n H.END_DATE as hend, H.NOTE as hnote\n from HISTORY H, STAT S\n where S.REF = H.REF_STAT\n and H.REF_OBJECT = '0000000001'\n order by H.HORDER ;\n\nEXPLAIN ANALYZE output on 8.4:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=4549.75..4555.76 rows=2404 width=176) (actual\ntime=1.341..1.343 rows=20 loops=1)\n Sort Key: h.horder\n Sort Method: quicksort Memory: 30kB\n -> Hash Join (cost=33.50..4414.75 rows=2404 width=176) (actual\ntime=1.200..1.232 rows=20 loops=1)\n Hash Cond: (h.ref_stat = s.ref)\n -> Index Scan using history_ref_idx on history h\n(cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052\nrows=20 loops=1)\n Index Cond: (ref_object = '0000000001'::bpchar)\n -> Hash (cost=21.00..21.00 rows=1000 width=45) (actual\ntime=1.147..1.147 rows=1000 loops=1)\n -> Seq Scan on stat s (cost=0.00..21.00 rows=1000\nwidth=45) (actual time=0.005..0.325 rows=1000 loops=1)\n Total runtime: 1.442 ms\n(10 rows)\n\nTable HISTORY contains 200M rows, only 20 needed\nTable STAT contains 1000 rows, only 20 needed to be joined to HISTORY values.\n\nTable definitions:\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\ncreate table STAT\n(\n REF CHAR(3) not null,\n NAME CHAR(40) not null,\n NUMB INT not null\n);\n\ncreate table HISTORY\n(\n REF_OBJECT CHAR(10) not null,\n HORDER INT not null,\n REF_STAT CHAR(3) not null,\n BEGIN_DATE CHAR(12) not null,\n END_DATE CHAR(12) ,\n NOTE CHAR(100)\n);\n\ncreate unique index stat_ref_idx on STAT( ref );\ncreate index history_ref_idx on HISTORY( ref_object, horder );\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\nNOTE: The same query runs 2 times faster on MySQL.\n\nAny idea?..\n\nRgds,\n-Dimitri\n", "msg_date": "Wed, 6 May 2009 09:38:59 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> Hi,\n> \n> any idea if there is a more optimal execution plan possible for this query:\n> \n> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,\n> H.END_DATE as hend, H.NOTE as hnote\n> from HISTORY H, STAT S\n> where S.REF = H.REF_STAT\n> and H.REF_OBJECT = '0000000001'\n> order by H.HORDER ;\n\nOK, so you're taking a simple:\n\n history INNER JOIN stat ON (stat.ref = history.ref_stat)\n\nthen filtering for records with a particular value of history.ref_object \nand finally performing a sort.\n\nIf I'm reading it right, the plan below does a sequential scan on the \n`stat' table. The stat table only has 1000 rows, so this isn't \nnecessarily an unreasonable choice even if there is an appropriate index \nand even if not many of the rows will be needed.\n\nIt then does an index scan of the history table looking for tuples with \nref_object = '0000000001' (text match). It hash joins the hashed results \nof the initial seq scan to the results of the index scan, and sorts the \nresult.\n\nTo me, that looks pretty reasonable. 
You might be able to avoid the hash \njoin in favour of a nested loop scan of stat_ref_idx (looping over \nrecords from history.ref_stat where ref_object = '00000000001') by \nproviding a composite index on HISTORY(ref_stat, ref_object). I'm really \nnot too sure, though; plan optimization isn't my thing, I'm just seeing \nif I can offer a few ideas.\n\n> Table definitions:\n\nWhile not strictly necessary, it's a *REALLY* good idea to define a \nsuitable PRIMARY KEY.\n\nAlso, the `CHAR(n)' data type is evil. E.V.I.L. Use `varchar(n)' for \nbounded-length values, or `text' for unbounded fields, unless you REALLY \nwant the crazy behaviour of `CHAR(n)'.\n\nI'm a little bit puzzled about why you seem to be doing lots of things \nwith integer values stored in text strings, but that probably doesn't \nmatter too much for the issue at hand.\n\n> NOTE: The same query runs 2 times faster on MySQL.\n\nWith InnoDB tables and proper transactional safety? Or using scary \nMyISAM tables and a \"just pray\" approach to data integrity? If you're \nusing MyISAM tables I'm not surprised; MySQL with MyISAM is stunningly \nfast, but oh-my-god dangerous.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 06 May 2009 16:01:03 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi Craig,\n\nyes, you detailed very well the problem! :-)\nall those CHAR columns are so just due historical issues :-) as well\nthey may contains anything else and not only numbers, that's why..\nAlso, all data inside are fixed, so VARCHAR will not save place, or\nwhat kind of performance issue may we expect with CHAR vs VARCHAR if\nall data have a fixed length?..\n\nAny way to force nested loop without additional index?..\n\nIt's 2 times faster on InnoDB, and as it's just a SELECT query no need\nto go in transaction details :-)\n\nRgds,\n-Dimitri\n\nOn 5/6/09, Craig Ringer <[email protected]> wrote:\n> Dimitri wrote:\n>> Hi,\n>>\n>> any idea if there is a more optimal execution plan possible for this\n>> query:\n>>\n>> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as\n>> hbeg,\n>> H.END_DATE as hend, H.NOTE as hnote\n>> from HISTORY H, STAT S\n>> where S.REF = H.REF_STAT\n>> and H.REF_OBJECT = '0000000001'\n>> order by H.HORDER ;\n>\n> OK, so you're taking a simple:\n>\n> history INNER JOIN stat ON (stat.ref = history.ref_stat)\n>\n> then filtering for records with a particular value of history.ref_object\n> and finally performing a sort.\n>\n> If I'm reading it right, the plan below does a sequential scan on the\n> `stat' table. The stat table only has 1000 rows, so this isn't\n> necessarily an unreasonable choice even if there is an appropriate index\n> and even if not many of the rows will be needed.\n>\n> It then does an index scan of the history table looking for tuples with\n> ref_object = '0000000001' (text match). It hash joins the hashed results\n> of the initial seq scan to the results of the index scan, and sorts the\n> result.\n>\n> To me, that looks pretty reasonable. You might be able to avoid the hash\n> join in favour of a nested loop scan of stat_ref_idx (looping over\n> records from history.ref_stat where ref_object = '00000000001') by\n> providing a composite index on HISTORY(ref_stat, ref_object). 
I'm really\n> not too sure, though; plan optimization isn't my thing, I'm just seeing\n> if I can offer a few ideas.\n>\n>> Table definitions:\n>\n> While not strictly necessary, it's a *REALLY* good idea to define a\n> suitable PRIMARY KEY.\n>\n> Also, the `CHAR(n)' data type is evil. E.V.I.L. Use `varchar(n)' for\n> bounded-length values, or `text' for unbounded fields, unless you REALLY\n> want the crazy behaviour of `CHAR(n)'.\n>\n> I'm a little bit puzzled about why you seem to be doing lots of things\n> with integer values stored in text strings, but that probably doesn't\n> matter too much for the issue at hand.\n>\n>> NOTE: The same query runs 2 times faster on MySQL.\n>\n> With InnoDB tables and proper transactional safety? Or using scary\n> MyISAM tables and a \"just pray\" approach to data integrity? If you're\n> using MyISAM tables I'm not surprised; MySQL with MyISAM is stunningly\n> fast, but oh-my-god dangerous.\n>\n> --\n> Craig Ringer\n>\n", "msg_date": "Wed, 6 May 2009 10:14:38 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> any idea if there is a more optimal execution plan possible for this query:\n> \n> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,\n> H.END_DATE as hend, H.NOTE as hnote\n> from HISTORY H, STAT S\n> where S.REF = H.REF_STAT\n> and H.REF_OBJECT = '0000000001'\n> order by H.HORDER ;\n> \n> EXPLAIN ANALYZE output on 8.4:\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=4549.75..4555.76 rows=2404 width=176) (actual\n> time=1.341..1.343 rows=20 loops=1)\n> Sort Key: h.horder\n> Sort Method: quicksort Memory: 30kB\n> -> Hash Join (cost=33.50..4414.75 rows=2404 width=176) (actual\n> time=1.200..1.232 rows=20 loops=1)\n> Hash Cond: (h.ref_stat = s.ref)\n> -> Index Scan using history_ref_idx on history h\n> (cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052\n> rows=20 loops=1)\n> Index Cond: (ref_object = '0000000001'::bpchar)\n> -> Hash (cost=21.00..21.00 rows=1000 width=45) (actual\n> time=1.147..1.147 rows=1000 loops=1)\n> -> Seq Scan on stat s (cost=0.00..21.00 rows=1000\n> width=45) (actual time=0.005..0.325 rows=1000 loops=1)\n> Total runtime: 1.442 ms\n> (10 rows)\n> \n> Table HISTORY contains 200M rows, only 20 needed\n> Table STAT contains 1000 rows, only 20 needed to be joined to HISTORY values.\n\nThe bad doesn't look too bad to me, although the planner is \nover-estimating the number of matches in the history table (2404 vs 20). \nThat's a bit surprising given how simple the predicate is. Make sure \nyou've ANALYZEd the table. If that's not enough, you can try to increase \nthe statistics target for ref_object column, ie. ALTER TABLE history \nALTER COLUMN ref_object SET STATISTICS 500. That might give you a \ndifferent plan, maybe with a nested loop join instead of hash join, \nwhich might be faster in this case.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 06 May 2009 11:20:58 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> Hi Craig,\n> \n> yes, you detailed very well the problem! 
:-)\n> all those CHAR columns are so just due historical issues :-) as well\n> they may contains anything else and not only numbers, that's why..\n> Also, all data inside are fixed, so VARCHAR will not save place, or\n> what kind of performance issue may we expect with CHAR vs VARCHAR if\n> all data have a fixed length?..\n\nNone in postgres, but the char/varchar thing may or may not bite you at \nsome point later - sounds like you have it covered though.\n\n> It's 2 times faster on InnoDB, and as it's just a SELECT query no need\n> to go in transaction details :-)\n\n Total runtime: 1.442 ms\n(10 rows)\n\nYou posted a query that's taking 2/1000's of a second. I don't really \nsee a performance problem here :)\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Wed, 06 May 2009 18:22:27 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi Heikki,\n\nI've already tried a target 1000 and the only thing it changes\ncomparing to the current 100 (default) is instead of 2404 rows it says\n240 rows, but the plan remaining the same..\n\nRgds,\n-Dimitri\n\nOn 5/6/09, Heikki Linnakangas <[email protected]> wrote:\n> Dimitri wrote:\n>> any idea if there is a more optimal execution plan possible for this\n>> query:\n>>\n>> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as\n>> hbeg,\n>> H.END_DATE as hend, H.NOTE as hnote\n>> from HISTORY H, STAT S\n>> where S.REF = H.REF_STAT\n>> and H.REF_OBJECT = '0000000001'\n>> order by H.HORDER ;\n>>\n>> EXPLAIN ANALYZE output on 8.4:\n>> QUERY\n>> PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=4549.75..4555.76 rows=2404 width=176) (actual\n>> time=1.341..1.343 rows=20 loops=1)\n>> Sort Key: h.horder\n>> Sort Method: quicksort Memory: 30kB\n>> -> Hash Join (cost=33.50..4414.75 rows=2404 width=176) (actual\n>> time=1.200..1.232 rows=20 loops=1)\n>> Hash Cond: (h.ref_stat = s.ref)\n>> -> Index Scan using history_ref_idx on history h\n>> (cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052\n>> rows=20 loops=1)\n>> Index Cond: (ref_object = '0000000001'::bpchar)\n>> -> Hash (cost=21.00..21.00 rows=1000 width=45) (actual\n>> time=1.147..1.147 rows=1000 loops=1)\n>> -> Seq Scan on stat s (cost=0.00..21.00 rows=1000\n>> width=45) (actual time=0.005..0.325 rows=1000 loops=1)\n>> Total runtime: 1.442 ms\n>> (10 rows)\n>>\n>> Table HISTORY contains 200M rows, only 20 needed\n>> Table STAT contains 1000 rows, only 20 needed to be joined to HISTORY\n>> values.\n>\n> The bad doesn't look too bad to me, although the planner is\n> over-estimating the number of matches in the history table (2404 vs 20).\n> That's a bit surprising given how simple the predicate is. Make sure\n> you've ANALYZEd the table. If that's not enough, you can try to increase\n> the statistics target for ref_object column, ie. ALTER TABLE history\n> ALTER COLUMN ref_object SET STATISTICS 500. That might give you a\n> different plan, maybe with a nested loop join instead of hash join,\n> which might be faster in this case.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n", "msg_date": "Wed, 6 May 2009 10:31:03 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." 
}, { "msg_contents": "Hi Chris,\n\nthe only problem I see here is it's 2 times slower vs InnoDB, so\nbefore I'll say myself it's ok I want to be sure there is nothing else\nto do.. :-)\n\nRgds,\n-Dimitri\n\n\nOn 5/6/09, Chris <[email protected]> wrote:\n> Dimitri wrote:\n>> Hi Craig,\n>>\n>> yes, you detailed very well the problem! :-)\n>> all those CHAR columns are so just due historical issues :-) as well\n>> they may contains anything else and not only numbers, that's why..\n>> Also, all data inside are fixed, so VARCHAR will not save place, or\n>> what kind of performance issue may we expect with CHAR vs VARCHAR if\n>> all data have a fixed length?..\n>\n> None in postgres, but the char/varchar thing may or may not bite you at\n> some point later - sounds like you have it covered though.\n>\n>> It's 2 times faster on InnoDB, and as it's just a SELECT query no need\n>> to go in transaction details :-)\n>\n> Total runtime: 1.442 ms\n> (10 rows)\n>\n> You posted a query that's taking 2/1000's of a second. I don't really\n> see a performance problem here :)\n>\n> --\n> Postgresql & php tutorials\n> http://www.designmagick.com/\n>\n>\n", "msg_date": "Wed, 6 May 2009 10:33:42 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> Hi Chris,\n> \n> the only problem I see here is it's 2 times slower vs InnoDB\n\nHow do you know? This isn't just based on the explain values reported, \nis it?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 06 May 2009 09:40:46 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi Richard,\n\nno, of course it's not based on explain :-)\nI've run several tests before and now going in depth to understand if\nthere is nothing wrong. Due such a single query time difference InnoDB\nis doing 2-3 times better TPS level comparing to PostgreSQL..\n\nRgds,\n-Dimitri\n\n\nOn 5/6/09, Richard Huxton <[email protected]> wrote:\n> Dimitri wrote:\n>> Hi Chris,\n>>\n>> the only problem I see here is it's 2 times slower vs InnoDB\n>\n> How do you know? This isn't just based on the explain values reported,\n> is it?\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n", "msg_date": "Wed, 6 May 2009 10:49:32 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> Hi Richard,\n> \n> no, of course it's not based on explain :-)\n> I've run several tests before and now going in depth to understand if\n> there is nothing wrong. Due such a single query time difference InnoDB\n> is doing 2-3 times better TPS level comparing to PostgreSQL..\n\nAnd you are satisfied that it is the planned query time that is the \ndominant factor here, and not parsing time, connection time, data \ntransport, disk bandwidth etc?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 06 May 2009 10:02:29 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Wed, 6 May 2009, Heikki Linnakangas wrote:\n>> Total runtime: 1.442 ms\n\nIt's pretty clear that this query isn't even going to disc - it's all CPU \ntime. That can be the case if you run the exact same query more than once, \nand it can cause your EXPLAIN output to be vastly different from your real \nuse case. 
Do the queries on the live system hit the disc at all?\n\n> The bad doesn't look too bad to me, although the planner is over-estimating \n> the number of matches in the history table (2404 vs 20). That's a bit \n> surprising given how simple the predicate is. Make sure you've ANALYZEd the \n> table. If that's not enough, you can try to increase the statistics target \n> for ref_object column, ie. ALTER TABLE history ALTER COLUMN ref_object SET \n> STATISTICS 500.\n\nI would have thought this would actually make it slower, by increasing the \ntime taken to plan. On such small queries, the planner overhead must be \nquite significant.\n\nMatthew\n\n-- \n Q: What's the difference between ignorance and apathy?\n A: I don't know, and I don't care.\n", "msg_date": "Wed, 6 May 2009 10:35:50 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Wed, May 6, 2009 at 3:38 AM, Dimitri <[email protected]> wrote:\n> Hi,\n>\n> any idea if there is a more optimal execution plan possible for this query:\n>\n> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,\n>        H.END_DATE as hend, H.NOTE as hnote\n>         from HISTORY H, STAT S\n>         where S.REF = H.REF_STAT\n>         and H.REF_OBJECT = '0000000001'\n>         order by H.HORDER ;\n>\n> EXPLAIN ANALYZE output on 8.4:\n>                                                                   QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------\n>  Sort  (cost=4549.75..4555.76 rows=2404 width=176) (actual\n> time=1.341..1.343 rows=20 loops=1)\n>   Sort Key: h.horder\n>   Sort Method:  quicksort  Memory: 30kB\n>   ->  Hash Join  (cost=33.50..4414.75 rows=2404 width=176) (actual\n> time=1.200..1.232 rows=20 loops=1)\n>         Hash Cond: (h.ref_stat = s.ref)\n>         ->  Index Scan using history_ref_idx on history h\n> (cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052\n> rows=20 loops=1)\n>               Index Cond: (ref_object = '0000000001'::bpchar)\n>         ->  Hash  (cost=21.00..21.00 rows=1000 width=45) (actual\n> time=1.147..1.147 rows=1000 loops=1)\n>               ->  Seq Scan on stat s  (cost=0.00..21.00 rows=1000\n> width=45) (actual time=0.005..0.325 rows=1000 loops=1)\n>  Total runtime: 1.442 ms\n> (10 rows)\n>\n> Table HISTORY contains 200M rows, only 20 needed\n> Table STAT contains 1000 rows, only 20 needed to be joined to HISTORY values.\n>\n> Table definitions:\n> \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n> create table STAT\n> (\n>    REF                 CHAR(3)            not null,\n>    NAME                CHAR(40)           not null,\n>    NUMB                INT                not null\n> );\n>\n> create table HISTORY\n> (\n>    REF_OBJECT          CHAR(10)              not null,\n>    HORDER              INT                   not null,\n>    REF_STAT            CHAR(3)               not null,\n>    BEGIN_DATE          CHAR(12)              not null,\n>    END_DATE            CHAR(12)                      ,\n>    NOTE                CHAR(100)\n> );\n>\n> create unique index stat_ref_idx on STAT( ref );\n> create index history_ref_idx on HISTORY( ref_object, horder );\n> \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n>\n> NOTE: The same query runs 2 
times faster on MySQL.\n\ncouple of things to try:\n*) as others have noted, get rid of char() columns. use varchar, or\nint if you can. this is a bigger deal in postgres than mysql.\n*) curious if disabling sequential scan helps (set enable_seqscan =\nfalse) or changes the plan. .3 msec is spent on seq scan and an index\nlookup is likely much faster.\n*) prepare the query:\n\nprepare history_stat(char(10) as\n select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,\n H.END_DATE as hend, H.NOTE as hnote\n from HISTORY H, STAT S\n where S.REF = H.REF_STAT\n and H.REF_OBJECT = $1\n order by H.HORDER ;\n\nexecute history_stat('0000000001');\n\n(prepared queries have some annoyances you need to be prepared to\ndeal with. however, they are quite useful when squeezing every last\nmsec out of fast queries).\n\nmerlin\n", "msg_date": "Wed, 6 May 2009 07:46:05 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Wed, May 6, 2009 at 7:46 AM, Merlin Moncure <[email protected]> wrote:\n> prepare history_stat(char(10) as\n\ntypo:\nprepare history_stat(char(10)) as\n", "msg_date": "Wed, 6 May 2009 07:47:22 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> I've run several tests before and now going in depth to understand if\n> there is nothing wrong. Due such a single query time difference InnoDB\n> is doing 2-3 times better TPS level comparing to PostgreSQL..\n\nWhy don't you use MySQL then?\nOr tune PostgreSQL?\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 6 May 2009 13:57:32 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "I'll try to answer all mails at once :-))\n\n- query is running fully in RAM, no I/O, no network, only CPU time\n\n- looping 100 times the same query gives 132ms total time (~1.32ms per\nquery), while it's 44ms on InnoDB (~0.44ms per query)\n\n- disabling seq scan forcing a planner to use an index scan, and\nfinally it worse as gives 1.53ms per query..\n\n- prepare the query helps: prepare statement takes 16ms, but execute\nruns in 0.98ms = which make me think it's not only a planner\noverhead... And it's still 2 times lower vs 0.44ms.\nAlso, generally prepare cannot be used in this test case as we suppose\nany query may be of any kind (even if it's not always true :-))\n\n- char or varchar should be used here because the reference code is\nsupposed to accept any characters (alphanumeric)\n\n- it also reminds me that probably there are some extra CPU time due\nlocale setting - but all my \"lc_*\" variables are set to \"C\"...\n\nRgds,\n-Dimitri\n\n\nOn 5/6/09, Merlin Moncure <[email protected]> wrote:\n> On Wed, May 6, 2009 at 7:46 AM, Merlin Moncure <[email protected]> wrote:\n>> prepare history_stat(char(10) as\n>\n> typo:\n> prepare history_stat(char(10)) as\n>\n", "msg_date": "Wed, 6 May 2009 14:33:13 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." 
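A minimal way to reproduce the seq-scan comparison above in a single psql session, without touching the global configuration (a sketch using the table and column names from the thread; SET LOCAL keeps the change scoped to one transaction):

    BEGIN;
    SET LOCAL enable_seqscan = off;   -- steer the planner away from the seq scan on STAT
    EXPLAIN ANALYZE
      SELECT S.REF, S.NAME, H.HORDER, H.BEGIN_DATE, H.END_DATE, H.NOTE
        FROM HISTORY H JOIN STAT S ON S.REF = H.REF_STAT
       WHERE H.REF_OBJECT = '0000000001'
       ORDER BY H.HORDER;
    ROLLBACK;                         -- the setting reverts with the transaction

Comparing the two EXPLAIN ANALYZE outputs (with and without the SET LOCAL) shows directly whether the extra fraction of a millisecond really comes from the seq scan on STAT or from the hash phase.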
}, { "msg_contents": "The story is simple: for the launching of MySQL 5.4 I've done a\ntesting comparing available on that time variations of InnoDB engines,\nand at the end by curiosity started the same test with PostgreSQL\n8.3.7 to see if MySQL performance level is more close to PostgreSQL\nnow (PG was a strong true winner before). For my big surprise MySQL\n5.4 outpassed 8.3.7...\nHowever, analyzing the PostgreSQL processing I got a feeling something\ngoes wrong on PG side.. So, now I've installed both 8.3.7 and 8.4beta1\nto see more in depth what's going on. Currently 8.4 performs much\nbetter than 8.3.7, but there is still a room for improvement if such a\nsmall query may go faster :-)\n\nRgds,\n-Dimitri\n\nOn 5/6/09, Albe Laurenz <[email protected]> wrote:\n> Dimitri wrote:\n>> I've run several tests before and now going in depth to understand if\n>> there is nothing wrong. Due such a single query time difference InnoDB\n>> is doing 2-3 times better TPS level comparing to PostgreSQL..\n>\n> Why don't you use MySQL then?\n> Or tune PostgreSQL?\n>\n> Yours,\n> Laurenz Albe\n>\n", "msg_date": "Wed, 6 May 2009 14:49:23 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> I'll try to answer all mails at once :-))\n> \n> - query is running fully in RAM, no I/O, no network, only CPU time\n> \n> - looping 100 times the same query gives 132ms total time (~1.32ms per\n> query), while it's 44ms on InnoDB (~0.44ms per query)\n\nWell, assuming you're happy that PG is tuned reasonably for your machine \nand that MySQL's query cache isn't returning the results here it looks \nlike MySQL is faster for this particular query.\n\nThe only obvious place there could be a big gain is with the hashing \nalgorithm. If you remove the ORDER BY and the query-time doesn't fall by \nmuch then it's the hash phase.\n\nThe other thing to try is to alter the query to be a SELECT count(*) \nrather than returning rows - that will let you measure the time to \ntransfer the result rows.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 06 May 2009 13:53:45 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Wed, May 06, 2009 at 04:01:03PM +0800, Craig Ringer wrote:\n> Dimitri wrote:\n>> Hi,\n>> any idea if there is a more optimal execution plan possible for this \n>> query:\n>> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as \n>> hbeg,\n>> H.END_DATE as hend, H.NOTE as hnote\n>> from HISTORY H, STAT S\n>> where S.REF = H.REF_STAT\n>> and H.REF_OBJECT = '0000000001'\n>> order by H.HORDER ;\n>\n> OK, so you're taking a simple:\n>\n> history INNER JOIN stat ON (stat.ref = history.ref_stat)\n>\n> then filtering for records with a particular value of history.ref_object \n> and finally performing a sort.\n>\n> If I'm reading it right, the plan below does a sequential scan on the \n> `stat' table. The stat table only has 1000 rows, so this isn't necessarily \n> an unreasonable choice even if there is an appropriate index and even if \n> not many of the rows will be needed.\n>\n> It then does an index scan of the history table looking for tuples with \n> ref_object = '0000000001' (text match). It hash joins the hashed results of \n> the initial seq scan to the results of the index scan, and sorts the \n> result.\n>\n> To me, that looks pretty reasonable. 
You might be able to avoid the hash \n> join in favour of a nested loop scan of stat_ref_idx (looping over records \n> from history.ref_stat where ref_object = '00000000001') by providing a \n> composite index on HISTORY(ref_stat, ref_object). I'm really not too sure, \n> though; plan optimization isn't my thing, I'm just seeing if I can offer a \n> few ideas.\n>\n>> Table definitions:\n>\n> While not strictly necessary, it's a *REALLY* good idea to define a \n> suitable PRIMARY KEY.\n>\n> Also, the `CHAR(n)' data type is evil. E.V.I.L. Use `varchar(n)' for \n> bounded-length values, or `text' for unbounded fields, unless you REALLY \n> want the crazy behaviour of `CHAR(n)'.\n>\n> I'm a little bit puzzled about why you seem to be doing lots of things with \n> integer values stored in text strings, but that probably doesn't matter too \n> much for the issue at hand.\n>\n>> NOTE: The same query runs 2 times faster on MySQL.\n>\n> With InnoDB tables and proper transactional safety? Or using scary MyISAM \n> tables and a \"just pray\" approach to data integrity? If you're using MyISAM \n> tables I'm not surprised; MySQL with MyISAM is stunningly fast, but \n> oh-my-god dangerous.\n>\n> --\n> Craig Ringer\n>\nI just thought I would ask. Are you using the query cache in MySQL?\nIf that is on, that could be the difference. Another thing to check,\ntry issuing the selects concurrently: 2 at a time, 5 at a time, 10\nat a time... and see if that has an effect on timing. In many of the\nbenchmarks, MySQL will out perform PostgreSQL for very low numbers of\nclients. Once you are using more than a handful, PostgreSQL pulls\nahead. Also, is this a completely static table? i.e. no updates or\ninserts. How is the performance with those happening? This should\nhelp you get a clearer picture of the performance.\n\nMy two cents.\nKen\n", "msg_date": "Wed, 6 May 2009 07:58:55 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Wed, May 06, 2009 at 02:49:23PM +0200, Dimitri wrote:\n> The story is simple: for the launching of MySQL 5.4 I've done a\n> testing comparing available on that time variations of InnoDB engines,\n> and at the end by curiosity started the same test with PostgreSQL\n> 8.3.7 to see if MySQL performance level is more close to PostgreSQL\n> now (PG was a strong true winner before). For my big surprise MySQL\n> 5.4 outpassed 8.3.7...\n> However, analyzing the PostgreSQL processing I got a feeling something\n> goes wrong on PG side.. So, now I've installed both 8.3.7 and 8.4beta1\n> to see more in depth what's going on. Currently 8.4 performs much\n> better than 8.3.7, but there is still a room for improvement if such a\n> small query may go faster :-)\n> \n> Rgds,\n> -Dimitri\n> \n> On 5/6/09, Albe Laurenz <[email protected]> wrote:\n> > Dimitri wrote:\n> >> I've run several tests before and now going in depth to understand if\n> >> there is nothing wrong. 
Due such a single query time difference InnoDB\n> >> is doing 2-3 times better TPS level comparing to PostgreSQL..\n> >\n> > Why don't you use MySQL then?\n> > Or tune PostgreSQL?\n> >\n> > Yours,\n> > Laurenz Albe\n> >\n\nAnother thought, have you tuned PostgreSQL for an in memory database?\nThose tuning options may be what is needed to improve the plan chosen\nby PostgreSQL.\n\nCheers,\nKen\n", "msg_date": "Wed, 6 May 2009 08:05:10 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn May 6, 2009, at 7:53 AM, Richard Huxton wrote:\n\n> Dimitri wrote:\n>> I'll try to answer all mails at once :-))\n>> - query is running fully in RAM, no I/O, no network, only CPU time\n>> - looping 100 times the same query gives 132ms total time (~1.32ms \n>> per\n>> query), while it's 44ms on InnoDB (~0.44ms per query)\n>\n> Well, assuming you're happy that PG is tuned reasonably for your \n> machine and that MySQL's query cache isn't returning the results \n> here it looks like MySQL is faster for this particular query.\n>\n> The only obvious place there could be a big gain is with the hashing \n> algorithm. If you remove the ORDER BY and the query-time doesn't \n> fall by much then it's the hash phase.\n>\n> The other thing to try is to alter the query to be a SELECT count(*) \n> rather than returning rows - that will let you measure the time to \n> transfer the result rows.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n\n\nDo you expect to run this query 100 times per second during your \napplication?\nor is this just a test to see how fast the query is for optimalisation.\n\nI always get scared myself with such a test as 'runs out of memory', \nreason\ngiven is that usually this is not really the case in a production \nenvironment.\n\nTry to make a little test case where you give the query random \nparameters\nso different result sets are returned. This will give you a better \nidea on how\nfast the query really is and might give you better comparison results.\n\ninstead of count(*) I isusallt do explain analyze to see how fast \nPostgreSQL handles to query.\n\nRies\n\n\n", "msg_date": "Wed, 6 May 2009 08:08:01 -0500", "msg_from": "Ries van Twisk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." 
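As a rough illustration of what "tuned for an in-memory data set" can mean in postgresql.conf terms - the values below are illustrative assumptions for a machine with tens of GB of RAM, not settings taken from this thread:

    shared_buffers = 4GB           # large enough that the whole working set stays resident
    effective_cache_size = 24GB    # tell the planner the data will be found in cache
    seq_page_cost = 1.0
    random_page_cost = 1.1         # random access costs almost the same as sequential
    checkpoint_segments = 64       # keep checkpoints out of the way during write tests

With random_page_cost close to seq_page_cost the planner is much more willing to pick index scans for data it believes is cached.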
}, { "msg_contents": "Folks, first of all:\n\n - I used a fixed reference value just to simplify the case analyzing\nand isolate it as max as possible, of course during my tests all\nvalues are random :-)\n\n- final goal of the test is to analyze scalability, so yes, concurrent\nsessions with random keys are growing from 1 to 256 (I run it on\n32cores server, no think time, just stressing), and the result is\nstill not yet better comparing to InnoDB\n\n- I'm analyzing this query running in memory to understand what's\nblocking while all main bottlenecks are avoided (no I/O anymore nor\nnetwork, etc.)\n\n- initial explain analyze and table details were posted in the first message\n\n\nNow, let's go more further:\n\n - so \"as it\" query execution took 1.50ms\n\n - after removing \"order by\" it took 1.19ms\n\n - select count(*) instead of columns and with removed \"order by\" took 0.98ms\n\n- execute of the same prepared \"select count(*) ...\" took 0.68ms\n\nSo, where the time is going?...\n\nRgds,\n-Dimitri\n\n\nOn 5/6/09, Ries van Twisk <[email protected]> wrote:\n>\n> On May 6, 2009, at 7:53 AM, Richard Huxton wrote:\n>\n>> Dimitri wrote:\n>>> I'll try to answer all mails at once :-))\n>>> - query is running fully in RAM, no I/O, no network, only CPU time\n>>> - looping 100 times the same query gives 132ms total time (~1.32ms\n>>> per\n>>> query), while it's 44ms on InnoDB (~0.44ms per query)\n>>\n>> Well, assuming you're happy that PG is tuned reasonably for your\n>> machine and that MySQL's query cache isn't returning the results\n>> here it looks like MySQL is faster for this particular query.\n>>\n>> The only obvious place there could be a big gain is with the hashing\n>> algorithm. If you remove the ORDER BY and the query-time doesn't\n>> fall by much then it's the hash phase.\n>>\n>> The other thing to try is to alter the query to be a SELECT count(*)\n>> rather than returning rows - that will let you measure the time to\n>> transfer the result rows.\n>>\n>> --\n>> Richard Huxton\n>> Archonet Ltd\n>>\n>\n>\n> Do you expect to run this query 100 times per second during your\n> application?\n> or is this just a test to see how fast the query is for optimalisation.\n>\n> I always get scared myself with such a test as 'runs out of memory',\n> reason\n> given is that usually this is not really the case in a production\n> environment.\n>\n> Try to make a little test case where you give the query random\n> parameters\n> so different result sets are returned. This will give you a better\n> idea on how\n> fast the query really is and might give you better comparison results.\n>\n> instead of count(*) I isusallt do explain analyze to see how fast\n> PostgreSQL handles to query.\n>\n> Ries\n>\n>\n>\n", "msg_date": "Wed, 6 May 2009 16:03:42 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> Hi Chris,\n> \n> the only problem I see here is it's 2 times slower vs InnoDB, so\n> before I'll say myself it's ok I want to be sure there is nothing else\n> to do.. :-)\n\nCan the genetic query optimizer come into play on small queries?\n\n--\nCraig Ringer\n", "msg_date": "Wed, 06 May 2009 22:04:33 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." 
}, { "msg_contents": "No.\n\nKen\nOn Wed, May 06, 2009 at 10:04:33PM +0800, Craig Ringer wrote:\n> Dimitri wrote:\n> > Hi Chris,\n> > \n> > the only problem I see here is it's 2 times slower vs InnoDB, so\n> > before I'll say myself it's ok I want to be sure there is nothing else\n> > to do.. :-)\n> \n> Can the genetic query optimizer come into play on small queries?\n> \n> --\n> Craig Ringer\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n", "msg_date": "Wed, 6 May 2009 09:26:32 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "> On Wed, May 06, 2009 at 10:04:33PM +0800, Craig Ringer wrote:\n>> Can the genetic query optimizer come into play on small queries?\n\nOn Wed, 6 May 2009, Kenneth Marshall wrote:\n> No.\n\nYes. But you would have had to have set some really weird configuration.\n\nMatthew\n\n-- \n And the lexer will say \"Oh look, there's a null string. Oooh, there's \n another. And another.\", and will fall over spectacularly when it realises\n there are actually rather a lot.\n - Computer Science Lecturer (edited)\n", "msg_date": "Wed, 6 May 2009 15:28:46 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "I supposed in case with prepare and then execute a query optimizer is\nno more coming in play on \"execute\" phase, or did I miss something?..\n\nForget to say: query cache is disabled on MySQL side.\n\nRgds,\n-Dimitri\n\nOn 5/6/09, Craig Ringer <[email protected]> wrote:\n> Dimitri wrote:\n>> Hi Chris,\n>>\n>> the only problem I see here is it's 2 times slower vs InnoDB, so\n>> before I'll say myself it's ok I want to be sure there is nothing else\n>> to do.. :-)\n>\n> Can the genetic query optimizer come into play on small queries?\n>\n> --\n> Craig Ringer\n>\n", "msg_date": "Wed, 6 May 2009 16:31:35 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." 
}, { "msg_contents": "On Wed, May 06, 2009 at 09:38:59AM +0200, Dimitri wrote:\n> Hi,\n> \n> any idea if there is a more optimal execution plan possible for this query:\n> \n> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,\n> H.END_DATE as hend, H.NOTE as hnote\n> from HISTORY H, STAT S\n> where S.REF = H.REF_STAT\n> and H.REF_OBJECT = '0000000001'\n> order by H.HORDER ;\n> \n> EXPLAIN ANALYZE output on 8.4:\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=4549.75..4555.76 rows=2404 width=176) (actual\n> time=1.341..1.343 rows=20 loops=1)\n> Sort Key: h.horder\n> Sort Method: quicksort Memory: 30kB\n> -> Hash Join (cost=33.50..4414.75 rows=2404 width=176) (actual\n> time=1.200..1.232 rows=20 loops=1)\n> Hash Cond: (h.ref_stat = s.ref)\n> -> Index Scan using history_ref_idx on history h\n> (cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052\n> rows=20 loops=1)\n> Index Cond: (ref_object = '0000000001'::bpchar)\n> -> Hash (cost=21.00..21.00 rows=1000 width=45) (actual\n> time=1.147..1.147 rows=1000 loops=1)\n> -> Seq Scan on stat s (cost=0.00..21.00 rows=1000\n> width=45) (actual time=0.005..0.325 rows=1000 loops=1)\n> Total runtime: 1.442 ms\n> (10 rows)\n> \n> Table HISTORY contains 200M rows, only 20 needed\n> Table STAT contains 1000 rows, only 20 needed to be joined to HISTORY values.\n> \n> Table definitions:\n> \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n> create table STAT\n> (\n> REF CHAR(3) not null,\n> NAME CHAR(40) not null,\n> NUMB INT not null\n> );\n> \n> create table HISTORY\n> (\n> REF_OBJECT CHAR(10) not null,\n> HORDER INT not null,\n> REF_STAT CHAR(3) not null,\n> BEGIN_DATE CHAR(12) not null,\n> END_DATE CHAR(12) ,\n> NOTE CHAR(100)\n> );\n> \n> create unique index stat_ref_idx on STAT( ref );\n> create index history_ref_idx on HISTORY( ref_object, horder );\n> \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n> \n> NOTE: The same query runs 2 times faster on MySQL.\n> \n> Any idea?..\n> \n> Rgds,\n> -Dimitri\n> \nDimitri,\n\nIs there any chance of profiling the postgres backend to see\nwhere the time is used?\n\nJust an idea,\nKen\n", "msg_date": "Wed, 6 May 2009 09:34:46 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi Ken,\n\nyes, I may do it, but I did not expect to come into profiling initially :-)\nI expected there is just something trivial within a plan that I just\ndon't know.. :-)\n\nBTW, is there already an integrated profiled within a code? 
or do I\nneed external tools?..\n\nRgds,\n-Dimitri\n\nOn 5/6/09, Kenneth Marshall <[email protected]> wrote:\n> On Wed, May 06, 2009 at 09:38:59AM +0200, Dimitri wrote:\n>> Hi,\n>>\n>> any idea if there is a more optimal execution plan possible for this\n>> query:\n>>\n>> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as\n>> hbeg,\n>> H.END_DATE as hend, H.NOTE as hnote\n>> from HISTORY H, STAT S\n>> where S.REF = H.REF_STAT\n>> and H.REF_OBJECT = '0000000001'\n>> order by H.HORDER ;\n>>\n>> EXPLAIN ANALYZE output on 8.4:\n>> QUERY\n>> PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=4549.75..4555.76 rows=2404 width=176) (actual\n>> time=1.341..1.343 rows=20 loops=1)\n>> Sort Key: h.horder\n>> Sort Method: quicksort Memory: 30kB\n>> -> Hash Join (cost=33.50..4414.75 rows=2404 width=176) (actual\n>> time=1.200..1.232 rows=20 loops=1)\n>> Hash Cond: (h.ref_stat = s.ref)\n>> -> Index Scan using history_ref_idx on history h\n>> (cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052\n>> rows=20 loops=1)\n>> Index Cond: (ref_object = '0000000001'::bpchar)\n>> -> Hash (cost=21.00..21.00 rows=1000 width=45) (actual\n>> time=1.147..1.147 rows=1000 loops=1)\n>> -> Seq Scan on stat s (cost=0.00..21.00 rows=1000\n>> width=45) (actual time=0.005..0.325 rows=1000 loops=1)\n>> Total runtime: 1.442 ms\n>> (10 rows)\n>>\n>> Table HISTORY contains 200M rows, only 20 needed\n>> Table STAT contains 1000 rows, only 20 needed to be joined to HISTORY\n>> values.\n>>\n>> Table definitions:\n>> \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n>> create table STAT\n>> (\n>> REF CHAR(3) not null,\n>> NAME CHAR(40) not null,\n>> NUMB INT not null\n>> );\n>>\n>> create table HISTORY\n>> (\n>> REF_OBJECT CHAR(10) not null,\n>> HORDER INT not null,\n>> REF_STAT CHAR(3) not null,\n>> BEGIN_DATE CHAR(12) not null,\n>> END_DATE CHAR(12) ,\n>> NOTE CHAR(100)\n>> );\n>>\n>> create unique index stat_ref_idx on STAT( ref );\n>> create index history_ref_idx on HISTORY( ref_object, horder );\n>> \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n>>\n>> NOTE: The same query runs 2 times faster on MySQL.\n>>\n>> Any idea?..\n>>\n>> Rgds,\n>> -Dimitri\n>>\n> Dimitri,\n>\n> Is there any chance of profiling the postgres backend to see\n> where the time is used?\n>\n> Just an idea,\n> Ken\n>\n", "msg_date": "Wed, 6 May 2009 16:48:21 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Wed, May 06, 2009 at 04:48:21PM +0200, Dimitri wrote:\n> Hi Ken,\n> \n> yes, I may do it, but I did not expect to come into profiling initially :-)\n> I expected there is just something trivial within a plan that I just\n> don't know.. :-)\n> \n> BTW, is there already an integrated profiled within a code? 
or do I\n> need external tools?..\n> \n> Rgds,\n> -Dimitri\n\nI only suggested it because it might have the effect of changing\nthe sequential scan on the stat table to an indexed scan.\n\nCheers,\nKen\n> \n> On 5/6/09, Kenneth Marshall <[email protected]> wrote:\n> > On Wed, May 06, 2009 at 09:38:59AM +0200, Dimitri wrote:\n> >> Hi,\n> >>\n> >> any idea if there is a more optimal execution plan possible for this\n> >> query:\n> >>\n> >> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as\n> >> hbeg,\n> >> H.END_DATE as hend, H.NOTE as hnote\n> >> from HISTORY H, STAT S\n> >> where S.REF = H.REF_STAT\n> >> and H.REF_OBJECT = '0000000001'\n> >> order by H.HORDER ;\n> >>\n> >> EXPLAIN ANALYZE output on 8.4:\n> >> QUERY\n> >> PLAN\n> >> ------------------------------------------------------------------------------------------------------------------------------------------------\n> >> Sort (cost=4549.75..4555.76 rows=2404 width=176) (actual\n> >> time=1.341..1.343 rows=20 loops=1)\n> >> Sort Key: h.horder\n> >> Sort Method: quicksort Memory: 30kB\n> >> -> Hash Join (cost=33.50..4414.75 rows=2404 width=176) (actual\n> >> time=1.200..1.232 rows=20 loops=1)\n> >> Hash Cond: (h.ref_stat = s.ref)\n> >> -> Index Scan using history_ref_idx on history h\n> >> (cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052\n> >> rows=20 loops=1)\n> >> Index Cond: (ref_object = '0000000001'::bpchar)\n> >> -> Hash (cost=21.00..21.00 rows=1000 width=45) (actual\n> >> time=1.147..1.147 rows=1000 loops=1)\n> >> -> Seq Scan on stat s (cost=0.00..21.00 rows=1000\n> >> width=45) (actual time=0.005..0.325 rows=1000 loops=1)\n> >> Total runtime: 1.442 ms\n> >> (10 rows)\n> >>\n> >> Table HISTORY contains 200M rows, only 20 needed\n> >> Table STAT contains 1000 rows, only 20 needed to be joined to HISTORY\n> >> values.\n> >>\n> >> Table definitions:\n> >> \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n> >> create table STAT\n> >> (\n> >> REF CHAR(3) not null,\n> >> NAME CHAR(40) not null,\n> >> NUMB INT not null\n> >> );\n> >>\n> >> create table HISTORY\n> >> (\n> >> REF_OBJECT CHAR(10) not null,\n> >> HORDER INT not null,\n> >> REF_STAT CHAR(3) not null,\n> >> BEGIN_DATE CHAR(12) not null,\n> >> END_DATE CHAR(12) ,\n> >> NOTE CHAR(100)\n> >> );\n> >>\n> >> create unique index stat_ref_idx on STAT( ref );\n> >> create index history_ref_idx on HISTORY( ref_object, horder );\n> >> \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n> >>\n> >> NOTE: The same query runs 2 times faster on MySQL.\n> >>\n> >> Any idea?..\n> >>\n> >> Rgds,\n> >> -Dimitri\n> >>\n> > Dimitri,\n> >\n> > Is there any chance of profiling the postgres backend to see\n> > where the time is used?\n> >\n> > Just an idea,\n> > Ken\n> >\n> \n", "msg_date": "Wed, 6 May 2009 10:49:18 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." 
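There is no built-in profiler as such, but the backend can be compiled so that ordinary gprof works against it - the --enable-profiling configure flag that comes up a little later in the thread. A rough sketch of the workflow (paths are illustrative, and the exact location where each backend writes its gmon.out varies by version and platform):

    ./configure --enable-profiling ...      # plus the usual build options
    make && make install
    # run the workload against the rebuilt server, close the session,
    # then feed the backend's gmon.out to gprof:
    gprof /usr/local/pgsql/bin/postgres gmon.out > profile.txt

The alternative is an external sampling profiler (DTrace, oprofile), which avoids rebuilding but runs into the clock-resolution limits discussed a bit later in the thread.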
}, { "msg_contents": "\nOn Wed, 2009-05-06 at 10:31 +0200, Dimitri wrote:\n\n> I've already tried a target 1000 and the only thing it changes\n> comparing to the current 100 (default) is instead of 2404 rows it says\n> 240 rows, but the plan remaining the same..\n\nTry both of these things\n* REINDEX on the index being used in the query, then re-EXPLAIN\n* enable_hashjoin = off, then re-EXPLAIN\n\nYou should first attempt to get the same plan, then confirm it really is\nfaster before we worry why the optimizer hadn't picked that plan. \n\nWe already know that MySQL favors nested loop joins, so turning up a\nplan that on this occasion is actually better that way is in no way\nrepresentative of general performance. Does MySQL support hash joins?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 06 May 2009 22:23:55 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi Simon,\n\nmay you explain why REINDEX may help here?.. - database was just\ncreated, data loaded, and then indexes were created + analyzed.. What\nmay change here after REINDEX?..\n\nWith hashjoin disabled was a good try!\nRunning this query \"as it\" from 1.50ms we move to 0.84ms now,\nand the plan is here:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=4562.83..4568.66 rows=2329 width=176) (actual\ntime=0.225..0.229 rows=20 loops=1)\n Sort Key: h.horder\n Sort Method: quicksort Memory: 30kB\n -> Merge Join (cost=4345.89..4432.58 rows=2329 width=176) (actual\ntime=0.056..0.205 rows=20 loops=1)\n Merge Cond: (s.ref = h.ref_stat)\n -> Index Scan using stat_ref_idx on stat s\n(cost=0.00..49.25 rows=1000 width=45) (actual time=0.012..0.079\nrows=193 loops=1)\n -> Sort (cost=4345.89..4351.72 rows=2329 width=135) (actual\ntime=0.041..0.043 rows=20 loops=1)\n Sort Key: h.ref_stat\n Sort Method: quicksort Memory: 30kB\n -> Index Scan using history_ref_idx on history h\n(cost=0.00..4215.64 rows=2329 width=135) (actual time=0.013..0.024\nrows=20 loops=1)\n Index Cond: (ref_object = '0000000001'::bpchar)\n Total runtime: 0.261 ms\n(12 rows)\n\nCuriously planner expect to run it in 0.26ms\n\nAny idea why planner is not choosing this plan from the beginning?..\nAny way to keep this plan without having a global or per sessions\nhashjoin disabled?..\n\nRgds,\n-Dimitri\n\n\nOn 5/6/09, Simon Riggs <[email protected]> wrote:\n>\n> On Wed, 2009-05-06 at 10:31 +0200, Dimitri wrote:\n>\n>> I've already tried a target 1000 and the only thing it changes\n>> comparing to the current 100 (default) is instead of 2404 rows it says\n>> 240 rows, but the plan remaining the same..\n>\n> Try both of these things\n> * REINDEX on the index being used in the query, then re-EXPLAIN\n> * enable_hashjoin = off, then re-EXPLAIN\n>\n> You should first attempt to get the same plan, then confirm it really is\n> faster before we worry why the optimizer hadn't picked that plan.\n>\n> We already know that MySQL favors nested loop joins, so turning up a\n> plan that on this occasion is actually better that way is in no way\n> representative of general performance. 
Does MySQL support hash joins?\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n", "msg_date": "Thu, 7 May 2009 10:20:46 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "The problem with \"gprof\" - it'll profile all stuff from the beginning\nto the end, and a lot of function executions unrelated to this query\nwill be profiled...\n\nAs well when you look on profiling technology - all such kind of\nsolutions are based on the system clock frequency and have their\nlimits on time resolution. On my system this limit is 0.5ms, and it's\ntoo big comparing to the query execution time :-)\n\nSo, what I've done - I changed little bit a reference key criteria from\n= '0000000001' to < '0000000051', so instead of 20 rows I have 1000\nrows on output now, it's still slower than InnoDB (12ms vs 9ms), but\nat least may be profiled (well, we also probably moving far from the\nproblem as time may be spent mostly on the output traffic now, but\nI've tried anyway) - I've made a loop of 100 iterations of this query\nwhich is reading but not printing data. The total execution time of\nthis loop is 1200ms, and curiously under profiling was not really\nchanged. Profiler was able to catch 733ms of total execution time (if\nI understand well, all functions running faster than 0.5ms are remain\nun-profiled). The top profiler output is here:\n\nExcl. Incl. Name\nUser CPU User CPU\n sec. sec.\n0.733 0.733 <Total>\n0.103 0.103 memcpy\n0.045 0.045 slot_deform_tuple\n0.037 0.040 AllocSetAlloc\n0.021 0.021 AllocSetFree\n0.018 0.037 pfree\n0.018 0.059 appendBinaryStringInfo\n0.017 0.031 heap_fill_tuple\n0.017 0.017 _ndoprnt\n0.016 0.016 nocachegetattr\n0.015 0.065 heap_form_minimal_tuple\n0.015 0.382 ExecProcNode\n0.015 0.015 strlen\n0.014 0.037 ExecScanHashBucket\n0.014 0.299 printtup\n0.013 0.272 ExecHashJoin\n0.011 0.011 enlargeStringInfo\n0.011 0.086 index_getnext\n0.010 0.010 hash_any\n0.009 0.076 FunctionCall1\n0.009 0.037 MemoryContextAlloc\n0.008 0.008 LWLockAcquire\n0.007 0.069 pq_sendcountedtext\n0.007 0.035 ExecProject\n0.007 0.127 ExecScan\n...\n\nCuriously \"memcpy\" is in top. Don't know if it's impacted in many\ncases, but probably it make sense to see if it may be optimized, etc..\n\nRgds,\n-Dimitri\n\n\n\nOn 5/7/09, Euler Taveira de Oliveira <[email protected]> wrote:\n> Dimitri escreveu:\n>> BTW, is there already an integrated profiled within a code? or do I\n>> need external tools?..\n>>\n> Postgres provides support for profiling. Add --enable-profiling flag. Use\n> gprof to get the profile.\n>\n>\n> --\n> Euler Taveira de Oliveira\n> http://www.timbira.com/\n>\n", "msg_date": "Thu, 7 May 2009 10:41:26 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Thu, May 7, 2009 at 4:20 AM, Dimitri <[email protected]> wrote:\n> Hi Simon,\n>\n> may you explain why REINDEX may help here?.. - database was just\n> created, data loaded, and then indexes were created + analyzed.. 
What\n> may change here after REINDEX?..\n>\n> With hashjoin disabled was a good try!\n> Running this query \"as it\" from 1.50ms we move to 0.84ms now,\n> and the plan is here:\n>\n>                                                                      QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Sort  (cost=4562.83..4568.66 rows=2329 width=176) (actual\n> time=0.225..0.229 rows=20 loops=1)\n>   Sort Key: h.horder\n>   Sort Method:  quicksort  Memory: 30kB\n>   ->  Merge Join  (cost=4345.89..4432.58 rows=2329 width=176) (actual\n> time=0.056..0.205 rows=20 loops=1)\n>         Merge Cond: (s.ref = h.ref_stat)\n>         ->  Index Scan using stat_ref_idx on stat s\n> (cost=0.00..49.25 rows=1000 width=45) (actual time=0.012..0.079\n> rows=193 loops=1)\n>         ->  Sort  (cost=4345.89..4351.72 rows=2329 width=135) (actual\n> time=0.041..0.043 rows=20 loops=1)\n>               Sort Key: h.ref_stat\n>               Sort Method:  quicksort  Memory: 30kB\n>               ->  Index Scan using history_ref_idx on history h\n> (cost=0.00..4215.64 rows=2329 width=135) (actual time=0.013..0.024\n> rows=20 loops=1)\n>                     Index Cond: (ref_object = '0000000001'::bpchar)\n>  Total runtime: 0.261 ms\n> (12 rows)\n>\n> Curiously planner expect to run it in 0.26ms\n>\n> Any idea why planner is not choosing this plan from the beginning?..\n> Any way to keep this plan without having a global or per sessions\n> hashjoin disabled?..\n\ncan you work prepared statements into your app? turn off hash join,\nprepare the query, then turn it back on.\n\nmerlin\n", "msg_date": "Thu, 7 May 2009 07:34:55 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n\n> We already know that MySQL favors nested loop joins\n\n From what I read I thought that was the *only* type of join MySQL supports.\n\nThe big picture view here is that whether we run a short query in half a\nmillisecond versus two milliseconds is usually not really important. It could\nmatter if you're concerned with how many transactions/s you can run in a busy\nserver -- but that's not exactly the same thing and you should really measure\nthat in that case.\n\nIt would be nice if we were in the same ballpark as MySQL but we would only be\ninteresting in such optimizations if they don't come at the expense of\nscalability under more complex workloads.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Thu, 07 May 2009 12:58:50 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Thu, 2009-05-07 at 12:58 +0100, Gregory Stark wrote:\n\n> It would be nice if we were in the same ballpark as MySQL but we would only be\n> interesting in such optimizations if they don't come at the expense of\n> scalability under more complex workloads.\n\nIt doesn't appear there is a scalability issue here at all.\n\nPostgres can clearly do the same query in about the same time.\n\nWe just have a case where MySQL happens to optimise it well and Postgres\ndoesn't. Since we can trivially design cases that show the opposite I'm\nnot worried too much. 
\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Thu, 07 May 2009 13:21:32 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n\n> As well when you look on profiling technology - all such kind of\n> solutions are based on the system clock frequency and have their\n> limits on time resolution. On my system this limit is 0.5ms, and it's\n> too big comparing to the query execution time :-)\n> \n> So, what I've done - I changed little bit a reference key criteria from\n> = '0000000001' to < '0000000051', so instead of 20 rows I have 1000\n> rows on output now,\n\nAnother thing you can try is run the query several times (like 10000 or so).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 7 May 2009 11:26:32 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "I've simply restarted a full test with hashjoin OFF. Until 32\nconcurrent users things are going well. Then since 32 users response\ntime is jumping to 20ms, with 64 users it's higher again, and with 256\nusers reaching 700ms, so TPS is dropping from 5.000 to ~200..\n\nWith hashjoin ON it's not happening, and I'm reaching at least 11.000\nTPS on fully busy 32 cores.\n\nI should not use prepare/execute as the test conditions should remain \"generic\".\n\nAbout scalability issue - there is one on 8.3.7, because on 32 cores\nwith such kind of load it's using only 50% CPU and not outpassing\n6.000 TPS, while 8.4 uses 90% CPU and reaching 11.000 TPS..\n\nOn the same time while I'm comparing 8.3 and 8.4 - the response time\nis 2 times lower in 8.4, and seems to me the main gain for 8.4 is\nhere.\n\nI'll publish all details, just need a time :-)\n\nRgds,\n-Dimitri\n\nOn 5/7/09, Merlin Moncure <[email protected]> wrote:\n> On Thu, May 7, 2009 at 4:20 AM, Dimitri <[email protected]> wrote:\n>> Hi Simon,\n>>\n>> may you explain why REINDEX may help here?.. - database was just\n>> created, data loaded, and then indexes were created + analyzed.. 
What\n>> may change here after REINDEX?..\n>>\n>> With hashjoin disabled was a good try!\n>> Running this query \"as it\" from 1.50ms we move to 0.84ms now,\n>> and the plan is here:\n>>\n>> QUERY\n>> PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=4562.83..4568.66 rows=2329 width=176) (actual\n>> time=0.225..0.229 rows=20 loops=1)\n>> Sort Key: h.horder\n>> Sort Method: quicksort Memory: 30kB\n>> -> Merge Join (cost=4345.89..4432.58 rows=2329 width=176) (actual\n>> time=0.056..0.205 rows=20 loops=1)\n>> Merge Cond: (s.ref = h.ref_stat)\n>> -> Index Scan using stat_ref_idx on stat s\n>> (cost=0.00..49.25 rows=1000 width=45) (actual time=0.012..0.079\n>> rows=193 loops=1)\n>> -> Sort (cost=4345.89..4351.72 rows=2329 width=135) (actual\n>> time=0.041..0.043 rows=20 loops=1)\n>> Sort Key: h.ref_stat\n>> Sort Method: quicksort Memory: 30kB\n>> -> Index Scan using history_ref_idx on history h\n>> (cost=0.00..4215.64 rows=2329 width=135) (actual time=0.013..0.024\n>> rows=20 loops=1)\n>> Index Cond: (ref_object = '0000000001'::bpchar)\n>> Total runtime: 0.261 ms\n>> (12 rows)\n>>\n>> Curiously planner expect to run it in 0.26ms\n>>\n>> Any idea why planner is not choosing this plan from the beginning?..\n>> Any way to keep this plan without having a global or per sessions\n>> hashjoin disabled?..\n>\n> can you work prepared statements into your app? turn off hash join,\n> prepare the query, then turn it back on.\n>\n> merlin\n>\n", "msg_date": "Thu, 7 May 2009 20:36:56 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Thu, 2009-05-07 at 20:36 +0200, Dimitri wrote:\n\n> I've simply restarted a full test with hashjoin OFF. Until 32\n> concurrent users things are going well. Then since 32 users response\n> time is jumping to 20ms, with 64 users it's higher again, and with 256\n> users reaching 700ms, so TPS is dropping from 5.000 to ~200..\n> \n> With hashjoin ON it's not happening, and I'm reaching at least 11.000\n> TPS on fully busy 32 cores.\n\nMuch better to stick to the defaults. \n\nSounds like a problem worth investigating further, but not pro bono.\n\n> About scalability issue - there is one on 8.3.7, because on 32 cores\n> with such kind of load it's using only 50% CPU and not outpassing\n> 6.000 TPS, while 8.4 uses 90% CPU and reaching 11.000 TPS..\n\nYeh, small changes make a big difference. Thanks for the info.\n\nHow does MySQL perform?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Thu, 07 May 2009 20:32:35 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Folks, it's completely crazy, but here is what I found:\n\n- if HISTORY table is analyzed with target 1000 my SELECT response\ntime is jumping to 3ms, and the max throughput is limited to 6.000 TPS\n(it's what happenned to 8.3.7)\n\n-if HISTORY table is analyzed with target 5 - my SELECT response time\nis decreasing to 1.2ms (!) 
and then my max TPS level is ~12.000 !\nand CPU is used up to 95% even by 8.3.7 :-) and 8.4 performed better\njust because I left its analyze target to default 100 value.\n\nAnyone may explain me why analyze target may have so huge negative\nsecondary effect?..\n\nNext point: SCALABILITY ISSUE\n\nNow both 8.3.7 and 8.4 have similar performance levels, but 8.3.7 is\nalways slightly better comparing to 8.4, but well. The problem I have:\n - on 8 cores: ~5.000 TPS / 5.500 MAX\n - on 16 cores: ~10.000 TPS / 11.000 MAX\n - on 32 cores: ~10.500 TPS / 11.500 MAX\n\nWhat else may limit concurrent SELECTs here?..\n\nYes, forget, MySQL is reaching 17.500 TPS here.\n\nRgds,\n-Dimitri\n\nOn 5/7/09, Simon Riggs <[email protected]> wrote:\n>\n> On Thu, 2009-05-07 at 20:36 +0200, Dimitri wrote:\n>\n>> I've simply restarted a full test with hashjoin OFF. Until 32\n>> concurrent users things are going well. Then since 32 users response\n>> time is jumping to 20ms, with 64 users it's higher again, and with 256\n>> users reaching 700ms, so TPS is dropping from 5.000 to ~200..\n>>\n>> With hashjoin ON it's not happening, and I'm reaching at least 11.000\n>> TPS on fully busy 32 cores.\n>\n> Much better to stick to the defaults.\n>\n> Sounds like a problem worth investigating further, but not pro bono.\n>\n>> About scalability issue - there is one on 8.3.7, because on 32 cores\n>> with such kind of load it's using only 50% CPU and not outpassing\n>> 6.000 TPS, while 8.4 uses 90% CPU and reaching 11.000 TPS..\n>\n> Yeh, small changes make a big difference. Thanks for the info.\n>\n> How does MySQL perform?\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n", "msg_date": "Mon, 11 May 2009 17:18:31 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri <[email protected]> writes:\n> Anyone may explain me why analyze target may have so huge negative\n> secondary effect?..\n\nIf these are simple queries, maybe what you're looking at is the\nincrease in planning time caused by having to process 10x as much\nstatistical data. Cranking statistics_target to the max just because\nyou can is not necessarily a good strategy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 May 2009 11:23:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.. " }, { "msg_contents": "Hi Tom,\n\nit was not willing :-)\nit just stayed so after various probes with a query plan.\n\nAnyway, on 8.4 the default target is 100, an just by move it to 5 I\nreached on 16cores 10.500 TPS instead of 8.000 initially. And I think\nyou have a good reason to keep it equal to 100 by default, isn't it?\n;-)\n\nAnd what about scalability on 32cores?..\nAny idea?\n\nRgds,\n-Dimitri\n\nOn 5/11/09, Tom Lane <[email protected]> wrote:\n> Dimitri <[email protected]> writes:\n>> Anyone may explain me why analyze target may have so huge negative\n>> secondary effect?..\n>\n> If these are simple queries, maybe what you're looking at is the\n> increase in planning time caused by having to process 10x as much\n> statistical data. Cranking statistics_target to the max just because\n> you can is not necessarily a good strategy.\n>\n> \t\t\tregards, tom lane\n>\n", "msg_date": "Mon, 11 May 2009 18:22:39 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." 
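For anyone wanting to reproduce the statistics-target effect described above, only the target and a re-ANALYZE change between the two runs. One per-column way to do it (Dimitri may equally have changed the global default_statistics_target; 5 and 1000 are the two values mentioned above, and \timing is psql's client-side stopwatch):

    ALTER TABLE HISTORY ALTER COLUMN REF_OBJECT SET STATISTICS 5;
    ANALYZE HISTORY;
    \timing on
    -- run the SELECT in a loop and note the per-query time,
    -- then repeat with SET STATISTICS 1000 and compare.

The larger target mainly grows the most-common-values list and histogram that the planner has to read through for every un-prepared query.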
}, { "msg_contents": "On Mon, May 11, 2009 at 11:18 AM, Dimitri <[email protected]> wrote:\n> Folks, it's completely crazy, but here is what I found:\n>\n> - if HISTORY table is analyzed with target 1000 my SELECT response\n> time is jumping to 3ms, and the max throughput is limited to 6.000 TPS\n> (it's what happenned to 8.3.7)\n>\n> -if HISTORY table is analyzed with target 5 - my SELECT response time\n> is decreasing to 1.2ms (!)  and then my max TPS level is ~12.000 !\n> and CPU is used up to 95% even by 8.3.7 :-)  and 8.4 performed better\n> just because I left its analyze target to default 100 value.\n>\n> Anyone may explain me why analyze target may have so huge negative\n> secondary effect?..\n>\n> Next point: SCALABILITY ISSUE\n>\n> Now both 8.3.7 and 8.4 have similar performance levels, but 8.3.7 is\n> always slightly better comparing to 8.4, but well. The problem I have:\n>   - on 8 cores: ~5.000 TPS  / 5.500 MAX\n>   - on 16 cores: ~10.000 TPS / 11.000 MAX\n>   - on  32 cores: ~10.500 TPS  / 11.500 MAX\n>\n> What else may limit concurrent SELECTs here?..\n>\n> Yes, forget, MySQL is reaching 17.500 TPS here.\n\nwhy aren't you preparing the query? mysql uses simple rule based\nplanner and postgresql has a statistics based planner. Our planner\nhas all kinds of advantages in various scenarios, but this is\ncompensated by slightly longer planning time in some cases. OTOH, you\nhave prepared queries to compensate this. (mysql also has prepared\nqueries, but the syntax is awkward and there is much less benefit to\nusing them).\n\nmerlin\n\nmerlin\n", "msg_date": "Mon, 11 May 2009 13:46:47 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Mon, 2009-05-11 at 17:18 +0200, Dimitri wrote:\n\n> Yes, forget, MySQL is reaching 17.500 TPS here.\n\nPlease share your measurements of MySQL scalability also.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Mon, 11 May 2009 19:26:54 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Mon, 2009-05-11 at 11:23 -0400, Tom Lane wrote:\n> Dimitri <[email protected]> writes:\n> > Anyone may explain me why analyze target may have so huge negative\n> > secondary effect?..\n> \n> If these are simple queries, maybe what you're looking at is the\n> increase in planning time caused by having to process 10x as much\n> statistical data. Cranking statistics_target to the max just because\n> you can is not necessarily a good strategy.\n\nstatistics_target effects tables, so we have problems if you have a mix\nof simple and complex queries. IMHO we need an explicit planner_effort\ncontrol, rather than the more arcane *_limit knobs which are effectively\nthe same thing, just harder to use in practice.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Mon, 11 May 2009 20:03:28 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." 
}, { "msg_contents": "* Dimitri <[email protected]> [090511 11:18]:\n> Folks, it's completely crazy, but here is what I found:\n> \n> - if HISTORY table is analyzed with target 1000 my SELECT response\n> time is jumping to 3ms, and the max throughput is limited to 6.000 TPS\n> (it's what happenned to 8.3.7)\n> \n> -if HISTORY table is analyzed with target 5 - my SELECT response time\n> is decreasing to 1.2ms (!) and then my max TPS level is ~12.000 !\n> and CPU is used up to 95% even by 8.3.7 :-) and 8.4 performed better\n> just because I left its analyze target to default 100 value.\n> \n> Anyone may explain me why analyze target may have so huge negative\n> secondary effect?..\n\nIt's actually pretty straight forward.\n\nThe PostgreSQL query planner is a \"smart planner\". It takes into\nconsideration all the statistics available on the columns/tables,\nexpected outputs based on inputs, etc, to choose what it thinks will be\nthe best plan. The more data you have in statistics (the larger\nstatistics target you have), the more CPU time and longer it's going to\ntake to \"plan\" your queries. The tradeoff is hopefully better plans.\n\nBut, in your scenario, where you are hitting the database with the\nabsolute worst possible way to use PostgreSQL, with small, repeated,\nsimple queries, you're not getting the advantage of \"better\" plans. In\nyour case, you're throwing absolutely simple queries at PG as fast as\nyou can, and for each query, PostgreSQL has to:\n\n1) Parse the given \"query string\"\n2) Given the statistics available, plan the query and pick the best one\n3) Actually run the query.\n\nPart 2 is going to dominate the CPU time in your tests, more so the more\nstatistics it has to evaluate, and unless the data has to come from the\ndisks (i.e. not in shared buffers or cache) is thus going to dominate the\ntime before you get your results. More statistics means more time\nneeded to do the planning/picking of the query.\n\nIf you were to use prepared statements, the cost of #1 and #2 is done\nonce, and then every time you throw a new execution of the query to\nPostgreSQL, you get to just do #3, the easy quick part, especially for\nsmall simple queries where all the data is in shared buffers or the cache.\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Mon, 11 May 2009 15:46:15 -0400", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi Simon,\n\nit's too early yet to speak about MySQL scalability... :-)\nit's only since few months there is *no more* regression on MySQL\nperformance while moving from 8 to 16 cores. But looking how quickly\nit's progressing now things may change very quickly :-)\n\nFor the moment on my tests it gives:\n - on 8 cores: 14.000 TPS\n - on 16 cores: 17.500 TPS\n - on 32 cores: 15.000 TPS (regression)\n\nRgds,\n-Dimitri\n\nOn 5/11/09, Simon Riggs <[email protected]> wrote:\n>\n> On Mon, 2009-05-11 at 17:18 +0200, Dimitri wrote:\n>\n>> Yes, forget, MySQL is reaching 17.500 TPS here.\n>\n> Please share your measurements of MySQL scalability also.\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n", "msg_date": "Tue, 12 May 2009 00:30:43 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." 
}, { "msg_contents": "Hi Aidan,\n\nthanks a lot for this detailed summary!\n\nSo, why I don't use prepare here: let's say I'm testing the worst\nstress case :-) Imagine you have thousands of such kind of queries -\nyou cannot prepare all of them! :-) or you'll maybe prepare it once,\nbut as I showed previously in this thread prepare statement itself\ntakes 16ms, so for a single shot there is no gain! :-) Stressing with\nsuch kind of short and simple queries (and again, they have joins, it\nmay be even more simple :-)) will give me a result to show with\nguarantee my worst case - I know then if I have to deploy a bombarding\nOLTP-like application my database engine will be able to keep such\nworkload, and if I have performance problems they are inside of\napplication! :-) (well, it's very simplistic, but it's not far from\nthe truth :-))\n\nNow, as you see from your explanation, the Part #2 is the most\ndominant - so why instead to blame this query not to implement a QUERY\nPLANNER CACHE??? - in way if any *similar* query is recognized by\nparser we simply *reuse* the same plan?..\n\nRgds,\n-Dimitri\n\n\nOn 5/11/09, Aidan Van Dyk <[email protected]> wrote:\n> * Dimitri <[email protected]> [090511 11:18]:\n>> Folks, it's completely crazy, but here is what I found:\n>>\n>> - if HISTORY table is analyzed with target 1000 my SELECT response\n>> time is jumping to 3ms, and the max throughput is limited to 6.000 TPS\n>> (it's what happenned to 8.3.7)\n>>\n>> -if HISTORY table is analyzed with target 5 - my SELECT response time\n>> is decreasing to 1.2ms (!) and then my max TPS level is ~12.000 !\n>> and CPU is used up to 95% even by 8.3.7 :-) and 8.4 performed better\n>> just because I left its analyze target to default 100 value.\n>>\n>> Anyone may explain me why analyze target may have so huge negative\n>> secondary effect?..\n>\n> It's actually pretty straight forward.\n>\n> The PostgreSQL query planner is a \"smart planner\". It takes into\n> consideration all the statistics available on the columns/tables,\n> expected outputs based on inputs, etc, to choose what it thinks will be\n> the best plan. The more data you have in statistics (the larger\n> statistics target you have), the more CPU time and longer it's going to\n> take to \"plan\" your queries. The tradeoff is hopefully better plans.\n>\n> But, in your scenario, where you are hitting the database with the\n> absolute worst possible way to use PostgreSQL, with small, repeated,\n> simple queries, you're not getting the advantage of \"better\" plans. In\n> your case, you're throwing absolutely simple queries at PG as fast as\n> you can, and for each query, PostgreSQL has to:\n>\n> 1) Parse the given \"query string\"\n> 2) Given the statistics available, plan the query and pick the best one\n> 3) Actually run the query.\n>\n> Part 2 is going to dominate the CPU time in your tests, more so the more\n> statistics it has to evaluate, and unless the data has to come from the\n> disks (i.e. not in shared buffers or cache) is thus going to dominate the\n> time before you get your results. 
More statistics means more time\n> needed to do the planning/picking of the query.\n>\n> If you were to use prepared statements, the cost of #1 and #2 is done\n> once, and then every time you throw a new execution of the query to\n> PostgreSQL, you get to just do #3, the easy quick part, especially for\n> small simple queries where all the data is in shared buffers or the cache.\n>\n> a.\n>\n> --\n> Aidan Van Dyk Create like a god,\n> [email protected] command like a king,\n> http://www.highrise.ca/ work like a slave.\n>\n", "msg_date": "Tue, 12 May 2009 00:46:53 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> Hi Aidan,\n> \n> thanks a lot for this detailed summary!\n> \n> So, why I don't use prepare here: let's say I'm testing the worst\n> stress case :-) Imagine you have thousands of such kind of queries -\n> you cannot prepare all of them! :-)\n\nThousands? Surely there'll be a dozen or three of most common queries,\nto which you pass different parameters. You can prepare thoseu\n\n> Now, as you see from your explanation, the Part #2 is the most\n> dominant - so why instead to blame this query not to implement a QUERY\n> PLANNER CACHE??? - in way if any *similar* query is recognized by\n> parser we simply *reuse* the same plan?..\n\nThis has been discussed in the past, but it turns out that a real\nimplementation is a lot harder than it seems.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 11 May 2009 18:54:29 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> Now, as you see from your explanation, the Part #2 is the most\n> dominant - so why instead to blame this query not to implement a QUERY\n> PLANNER CACHE??? - in way if any *similar* query is recognized by\n> parser we simply *reuse* the same plan?..\n\nAt least in JDBC, there's several open source prepared statement cache \nimplementations out there that people use. I don't know about other \nclient libraries, but it certainly is possible to do in the client.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 12 May 2009 08:11:40 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": ">> So, why I don't use prepare here: let's say I'm testing the worst\n>> stress case :-) Imagine you have thousands of such kind of queries -\n>> you cannot prepare all of them! :-)\n>\n> Thousands? Surely there'll be a dozen or three of most common queries,\n> to which you pass different parameters. You can prepare thoseu\n\nOk, and if each client just connect to the database, execute each kind\nof query just *once* and then disconnect?.. - cost of prepare will\nkill performance here if it's not reused at least 10 times within the\nsame session.\n\nWell, I know, we always can do better, and even use stored procedures,\netc. etc.\n\n\n>\n>> Now, as you see from your explanation, the Part #2 is the most\n>> dominant - so why instead to blame this query not to implement a QUERY\n>> PLANNER CACHE??? 
- in way if any *similar* query is recognized by\n>> parser we simply *reuse* the same plan?..\n>\n> This has been discussed in the past, but it turns out that a real\n> implementation is a lot harder than it seems.\n\nOk. If I remember well, Oracle have it and it helps a lot, but for\nsure it's not easy to implement..\n\nRgds,\n-Dimitri\n", "msg_date": "Tue, 12 May 2009 09:29:08 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Nice to know. But again, if this cache is kept only on the client side\nit'll be always lost on disconnect. And if clients are \"short-lived\"\nit'll not help.\n\nBTW, is there an option to say \"do execution plan as simple as\npossible\"? If you're sure about your data and your indexes - don't\nneed to spend so much time.\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Heikki Linnakangas <[email protected]> wrote:\n> Dimitri wrote:\n>> Now, as you see from your explanation, the Part #2 is the most\n>> dominant - so why instead to blame this query not to implement a QUERY\n>> PLANNER CACHE??? - in way if any *similar* query is recognized by\n>> parser we simply *reuse* the same plan?..\n>\n> At least in JDBC, there's several open source prepared statement cache\n> implementations out there that people use. I don't know about other\n> client libraries, but it certainly is possible to do in the client.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n", "msg_date": "Tue, 12 May 2009 09:36:31 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi,\n\nOn 05/12/2009 12:46 AM, Dimitri wrote:\n> So, why I don't use prepare here: let's say I'm testing the worst\n> stress case :-) Imagine you have thousands of such kind of queries -\n> you cannot prepare all of them! :-) or you'll maybe prepare it once,\n> but as I showed previously in this thread prepare statement itself\n> takes 16ms, so for a single shot there is no gain! :-)\nI have a hard time imaging a high throughput OLTP workload with that \nmany different queries ;-)\n\nNaturally it would still be nice to be good in this not optimal workload...\n\nAndres\n", "msg_date": "Tue, 12 May 2009 09:43:13 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi,\n\nDimitri <[email protected]> writes:\n\n>>> So, why I don't use prepare here: let's say I'm testing the worst\n>>> stress case :-) Imagine you have thousands of such kind of queries -\n>>> you cannot prepare all of them! :-)\n>>\n>> Thousands? Surely there'll be a dozen or three of most common queries,\n>> to which you pass different parameters. You can prepare thoseu\n>\n> Ok, and if each client just connect to the database, execute each kind\n> of query just *once* and then disconnect?.. - cost of prepare will\n> kill performance here if it's not reused at least 10 times within the\n> same session.\n\nIn a scenario which looks like this one, what I'm doing is using\npgbouncer transaction pooling. 
Now a new connection from client can be\nserved by an existing backend, which already has prepared your\nstatement.\n\nSo you first SELECT name FROM pg_prepared_statements; to know if you\nhave to PREPARE or just EXECUTE, and you not only maintain much less\nrunning backends, lower fork() calls, but also benefit fully from\npreparing the statements even when you EXECUTE once per client\nconnection.\n\n> Well, I know, we always can do better, and even use stored procedures,\n> etc. etc.\n\nPlain SQL stored procedure will prevent PostgreSQL to prepare your\nqueries, only PLpgSQL functions will force transparent plan caching. But\ncalling this PL will cost about 1ms per call in my tests, so it's not a\ngood solution.\n\nIt's possible to go as far as providing your own PostgreSQL C module\nwhere you PREPARE at _PG_init() time and EXECUTE in a SQL callable\nfunction, coupled with pgbouncer it should max out the perfs. But maybe\nyou're not willing to go this far.\n\nAnyway, is hammering the server with always the same query your real\nneed or just a simplified test-case? If the former, you'll see there are\ngood ways to theorically obtain better perfs than what you're currently\nreaching, if the latter I urge you to consider some better benchmarking\ntools, such as playr or tsung.\n\n https://area51.myyearbook.com/trac.cgi/wiki/Playr\n http://tsung.erlang-projects.org/\n http://pgfouine.projects.postgresql.org/tsung.html\n http://archives.postgresql.org/pgsql-admin/2008-12/msg00032.php\n\nRegards,\n-- \ndim\n", "msg_date": "Tue, 12 May 2009 10:44:09 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Folks, before you start to think \"what a dumb guy doing a dumb thing\" :-))\nI'll explain you few details:\n\nit's for more than 10 years I'm using a db_STRESS kit\n(http://dimitrik.free.fr/db_STRESS.html) to check databases\nperformance and scalability. Until now I was very happy with results\nit gave me as it stress very well each database engine internals an\nput on light some things I should probably skip on other workloads.\nWhat do you want, with a time the \"fast\" query executed before in\n500ms now runs within 1-2ms - not only hardware was improved but also\ndatabase engines increased their performance a lot! :-))\n\nIn 2007 I've published the first public results with PostgreSQL, and\nit was 2 times faster on that time comparing to MySQL\n(http://dimitrik.free.fr/db_STRESS_BMK_Part1.html)\n\nLast month for the launching of MySQL 5.4 I've done a long series of\ntests and at the end for my curiosity I've executed the same load\nagainst PostgreSQL 8.3.7 to see if MySQL is more close now. For my big\nsurprise, MySQL was faster! As well observations on PG processing\nbring me a lot of questions - I supposed something was abnormal on PG\nside, but I did not have too much time to understand what it was\nexactly (http://dimitrik.free.fr/db_STRESS_MySQL_540_and_others_Apr2009.html#note_5443)\n\nWhat I'm trying to do now is to understand what exactly is the problem.\n\nWhat I discovered so far with all your help:\n - the impact of a planner\n - the impact of the analyze target\n - the impact of prepare / execute\n - scalability limit on 32 cores\n\nI'll also try to adapt prepare/execute solution to see how much it\nimproves performance and/or scalability.\n\nAs well helping from the other thread I was able to improve a lot the\nTPS stability on read+write workload! 
:-)\n\nAny other comments are welcome!\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Dimitri Fontaine <[email protected]> wrote:\n> Hi,\n>\n> Dimitri <[email protected]> writes:\n>\n>>>> So, why I don't use prepare here: let's say I'm testing the worst\n>>>> stress case :-) Imagine you have thousands of such kind of queries -\n>>>> you cannot prepare all of them! :-)\n>>>\n>>> Thousands? Surely there'll be a dozen or three of most common queries,\n>>> to which you pass different parameters. You can prepare thoseu\n>>\n>> Ok, and if each client just connect to the database, execute each kind\n>> of query just *once* and then disconnect?.. - cost of prepare will\n>> kill performance here if it's not reused at least 10 times within the\n>> same session.\n>\n> In a scenario which looks like this one, what I'm doing is using\n> pgbouncer transaction pooling. Now a new connection from client can be\n> served by an existing backend, which already has prepared your\n> statement.\n>\n> So you first SELECT name FROM pg_prepared_statements; to know if you\n> have to PREPARE or just EXECUTE, and you not only maintain much less\n> running backends, lower fork() calls, but also benefit fully from\n> preparing the statements even when you EXECUTE once per client\n> connection.\n>\n>> Well, I know, we always can do better, and even use stored procedures,\n>> etc. etc.\n>\n> Plain SQL stored procedure will prevent PostgreSQL to prepare your\n> queries, only PLpgSQL functions will force transparent plan caching. But\n> calling this PL will cost about 1ms per call in my tests, so it's not a\n> good solution.\n>\n> It's possible to go as far as providing your own PostgreSQL C module\n> where you PREPARE at _PG_init() time and EXECUTE in a SQL callable\n> function, coupled with pgbouncer it should max out the perfs. But maybe\n> you're not willing to go this far.\n>\n> Anyway, is hammering the server with always the same query your real\n> need or just a simplified test-case? If the former, you'll see there are\n> good ways to theorically obtain better perfs than what you're currently\n> reaching, if the latter I urge you to consider some better benchmarking\n> tools, such as playr or tsung.\n>\n> https://area51.myyearbook.com/trac.cgi/wiki/Playr\n> http://tsung.erlang-projects.org/\n> http://pgfouine.projects.postgresql.org/tsung.html\n> http://archives.postgresql.org/pgsql-admin/2008-12/msg00032.php\n>\n> Regards,\n> --\n> dim\n>\n", "msg_date": "Tue, 12 May 2009 12:19:05 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-12 at 12:19 +0200, Dimitri wrote:\n\n> For my big surprise, MySQL was faster!\n\nOurs too.\n\n** I bet you $1000 that I can improve the performance of your benchmark\nresults with PostgreSQL. You give me $1000 up-front and if I can't\nimprove your high end numbers I'll give you $2000 back. Either way, you\nname me and link to me from your blog. Assuming you re-run the tests as\nrequested and give me reasonable access to info and measurements. **\n\nI note your blog identifies you as a Sun employee. Is that correct? If\nyou do not give us the opportunity to improve upon the results then\nreasonable observers might be persuaded you did not wish to show\nPostgreSQL in its best light. 
You up for it?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 12 May 2009 11:48:59 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Wow, Simon! :-))\n\nyes, I'm working in Sun Benchmark Center :-))\n(I'm not using my Sun email on public lists only to avid a spam)\n\nand as came here and asking questions it's probably proving my\nintentions to show PostgreSQL in its best light, no?.. - I never liked\n\"not honest\" comparisons :-))\n\nRegarding your bet: from a very young age I learned a one thing - you\ntake any 2 person who betting for any reason - you'll find in them one\nidiot and one bastard :-)) idiot - because betting while missing\nknowledge, and bastard - because knowing the truth is not honset to\nget a profit from idiots :-)) That's why I never betting in my life,\nbut every time telling the same story in such situation... Did you\nlike it? ;-))\n\nHowever, no problem to give you a credit as well to all pg-perf list\nas it provides a very valuable help! :-))\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Simon Riggs <[email protected]> wrote:\n>\n> On Tue, 2009-05-12 at 12:19 +0200, Dimitri wrote:\n>\n>> For my big surprise, MySQL was faster!\n>\n> Ours too.\n>\n> ** I bet you $1000 that I can improve the performance of your benchmark\n> results with PostgreSQL. You give me $1000 up-front and if I can't\n> improve your high end numbers I'll give you $2000 back. Either way, you\n> name me and link to me from your blog. Assuming you re-run the tests as\n> requested and give me reasonable access to info and measurements. **\n>\n> I note your blog identifies you as a Sun employee. Is that correct? If\n> you do not give us the opportunity to improve upon the results then\n> reasonable observers might be persuaded you did not wish to show\n> PostgreSQL in its best light. You up for it?\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n", "msg_date": "Tue, 12 May 2009 13:16:08 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-12 at 13:16 +0200, Dimitri wrote:\n\n> Wow, Simon! :-))\n> \n> yes, I'm working in Sun Benchmark Center :-))\n> (I'm not using my Sun email on public lists only to avid a spam)\n> \n> and as came here and asking questions it's probably proving my\n> intentions to show PostgreSQL in its best light, no?.. - I never liked\n> \"not honest\" comparisons :-))\n> \n> Regarding your bet: from a very young age I learned a one thing - you\n> take any 2 person who betting for any reason - you'll find in them one\n> idiot and one bastard :-)) idiot - because betting while missing\n> knowledge, and bastard - because knowing the truth is not honset to\n> get a profit from idiots :-)) That's why I never betting in my life,\n> but every time telling the same story in such situation... Did you\n> like it? ;-))\n\nNo, but I asked for it, so we're even. ;-)\n\nLet's work on the benchmark.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 12 May 2009 12:33:10 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." 
}, { "msg_contents": "Dimitri wrote:\n> What I discovered so far with all your help:\n> - the impact of a planner\n> - the impact of the analyze target\n> - the impact of prepare / execute\n> - scalability limit on 32 cores\n\nYou've received good advice on how to minimize the impact of the first \nthree points, and using those techniques should bring a benefit. But I'm \npretty surprised by the bad scalability you're seeing and no-one seems \nto have a good idea on where that limit is coming from. At a quick \nglance, I don't see any inherent bottlenecks in the schema and workload.\n\nIf you could analyze where the bottleneck is with multiple cores, that \nwould be great. With something like oprofile, it should be possible to \nfigure out where the time is spent.\n\nMy first guess would be the WALInsertLock: writing to WAL is protected \nby that and it an become a bottleneck with lots of small \nUPDATE/DELETE/INSERT transactions. But a profile would be required to \nverify that.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 12 May 2009 14:35:32 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-12 at 12:19 +0200, Dimitri wrote:\n\n> What I'm trying to do now is to understand what exactly is the\n> problem.\n\nYou're running with 1600 users, which is above the scalability limit\nuncovered (by Sun...) during earlier benchmarking. The scalability\nissues are understood but currently considered above the\nreasonable-setting limit and so nobody has been inclined to improve\nmatters.\n\nYou should use a connection concentrator to reduce the number of\nsessions down to say 400.\n\nYou're WAL buffers setting is also too low and you will be experiencing\ncontention on the WALWriteLock. Increase wal_buffers to about x8 where\nyou have it now.\n\nYou can move pg_xlog to its own set of drives.\n\nSet checkpoint_completion_target to 0.95.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 12 May 2009 12:49:41 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "For the moment I'm even not considering any scalability issues on the\nRead+Write workload - it may always be related to the storage box, and\nstorage latency or controller/cache efficiency may play a lot.\n\nAs problem I'm considering a scalability issue on Read-Only workload -\nonly selects, no disk access, and if on move from 8 to 16 cores we\ngain near 100%, on move from 16 to 32 cores it's only 10%...\n\nI think I have to replay Read-Only with prepare/execute and check how\nmuch it'll help (don't know if there are some internal locking used\nwhen a planner is involved)..\n\nAnd yes, I'll try to profile on 32 cores, it makes sense.\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Heikki Linnakangas <[email protected]> wrote:\n> Dimitri wrote:\n>> What I discovered so far with all your help:\n>> - the impact of a planner\n>> - the impact of the analyze target\n>> - the impact of prepare / execute\n>> - scalability limit on 32 cores\n>\n> You've received good advice on how to minimize the impact of the first\n> three points, and using those techniques should bring a benefit. But I'm\n> pretty surprised by the bad scalability you're seeing and no-one seems\n> to have a good idea on where that limit is coming from. 
At a quick\n> glance, I don't see any inherent bottlenecks in the schema and workload.\n>\n> If you could analyze where the bottleneck is with multiple cores, that\n> would be great. With something like oprofile, it should be possible to\n> figure out where the time is spent.\n>\n> My first guess would be the WALInsertLock: writing to WAL is protected\n> by that and it can become a bottleneck with lots of small\n> UPDATE/DELETE/INSERT transactions. But a profile would be required to\n> verify that.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n", "msg_date": "Tue, 12 May 2009 14:28:50 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> Folks, before you start to think \"what a dumb guy doing a dumb thing\" :-))\n> I'll explain you few details:\n>\n> it's for more than 10 years I'm using a db_STRESS kit\n> (http://dimitrik.free.fr/db_STRESS.html) to check databases\n> performance and scalability. Until now I was very happy with results\n> it gave me as it stress very well each database engine internals an\n> put on light some things I should probably skip on other workloads.\n> What do you want, with a time the \"fast\" query executed before in\n> 500ms now runs within 1-2ms - not only hardware was improved but also\n> database engines increased their performance a lot! :-))\n\nI was attempting to look into that \"benchmark\" kit a bit but I find the \ninformation on that page a bit lacking :( a few notices:\n\n* is the sourcecode for the benchmark actually available? the \"kit\" \nseems to contain a few precompiled binaries and some source/headfiles \nbut there are no building instructions, no makefile or even a README \nwhich makes it really hard to verify exactly what the benchmark is doing \nor if the benchmark client might actually be the problem here.\n\n* there is very little information on how the toolkit talks to the \ndatabase - some of the binaries seem to contain a static copy of libpq \nor such?\n\n* how many queries per session is the toolkit actually using - some \nearlier comments seem to imply you are doing a connect/disconnect cycle \nfor every query, is that actually true?\n\n\nStefan\n", "msg_date": "Tue, 12 May 2009 14:36:22 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Wait wait, currently I'm playing the \"stress scenario\", so there are\nonly 256 sessions max, but think time is zero (full stress). 
Scenario\nwith 1600 users is to test how database is solid just to keep a huge\namount of users, but doing only one transaction per second (very low\nglobal TPS comparing to what database is able to do, but it's testing\nhow well its internals working to manage the user sessions).\n\nI did not plan to do 1600 users test this time (all depends on time :-))\n\nSo, do I need to increase WAL buffers for 256 users?\n\nMy LOG and DATA are placed on separated storage LUNs and controllers\nfrom the beginning.\n\nI've changed the default 0.5 checkpoint_completion_target to 0.8 now,\nshould I go until 0.95 ?..\n\nAlso, to avoid TPS \"waves\" and bring stability on Read+Write workload\nI followed advices from a parallel thread:\n\nbgwriter_lru_maxpages = 1000\nbgwriter_lru_multiplier = 4.0\nshared_buffers = 1024MB\n\nI've also tried shared_buffers=256MB as it was advised, but then\nRead-Only workload decreasing performance as PG self caching helps\nanyway.\n\nAlso, checkpoint_timeout is 30s now, and of course a huge difference\ncame with moving default_statistics_target to 5 ! -but this one I\nfound myself :-))\n\nProbably checkpoint_timeout may be bigger now with the current\nsettings? - the goal here is to keep Read+Write TPS as stable as\npossible and also avoid a long recovery in case of\nsystem/database/other crash (in theory).\n\nRgds,\n-Dimitri\n\n\nOn 5/12/09, Simon Riggs <[email protected]> wrote:\n>\n> On Tue, 2009-05-12 at 12:19 +0200, Dimitri wrote:\n>\n>> What I'm trying to do now is to understand what exactly is the\n>> problem.\n>\n> You're running with 1600 users, which is above the scalability limit\n> uncovered (by Sun...) during earlier benchmarking. The scalability\n> issues are understood but currently considered above the\n> reasonable-setting limit and so nobody has been inclined to improve\n> matters.\n>\n> You should use a connection concentrator to reduce the number of\n> sessions down to say 400.\n>\n> You're WAL buffers setting is also too low and you will be experiencing\n> contention on the WALWriteLock. Increase wal_buffers to about x8 where\n> you have it now.\n>\n> You can move pg_xlog to its own set of drives.\n>\n> Set checkpoint_completion_target to 0.95.\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n", "msg_date": "Tue, 12 May 2009 14:59:02 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi Stefan,\n\nsorry, I did not have a time to bring all details into the toolkit -\nbut at least I published it instead to tell a \"nice story\" about :-)\n\nThe client process is a binary compiled with libpq. Client is\ninterpreting a scenario script and publish via SHM a time spent on\neach SQL request. I did not publish sources yet as it'll also require\nto explain how to compile them :-)) So for the moment it's shipped as\na freeware, but with time everything will be available (BTW, you're\nthe first who asking for sources (well, except IBM guys who asked to\nget it on POWER boxes, but it's another story :-))\n\nWhat is good is each client is publishing *live* its internal stats an\nwe're able to get live data and follow any kind of \"waves\" in\nperformance. Each session is a single process, so there is no\ncontention between clients as you may see on some other tools. The\ncurrent scenario script contains 2 selects (representing a Read\ntransaction) and delete/insert/update (representing Write\ntransaction). 
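For reference, the settings being discussed in the exchange above (Simon's suggestions plus the values already in use) map onto a postgresql.conf fragment along these lines. The values are simply the ones quoted in the thread, and the absolute wal_buffers figure is an assumption based on the 8.3 default of 64kB, so treat this as a sketch of the thread's numbers rather than a general recommendation:

    shared_buffers = 1024MB
    checkpoint_timeout = 30s                # value currently in use; quite aggressive
    checkpoint_completion_target = 0.95     # Simon's suggestion (0.8 in use at this point)
    wal_buffers = 512kB                     # 'about x8' the current setting, assuming the 8.3 default of 64kB
    bgwriter_lru_maxpages = 1000
    bgwriter_lru_multiplier = 4.0
    default_statistics_target = 5           # found above to cut planning time a lot for this workload
    # pg_xlog kept on its own drives/LUN, separate from the data files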
According a start parameters each client executing a\ngiven number Reads per Write. It's connecting on the beginning and\ndisconnecting at the end of the test.\n\nIt's also possible to extend it to do other queries, or simply give to\neach client a different scenario script - what's important is to able\nto collect then its stats live to understand what's going wrong (if\nany)..\n\nI'm planning to extend it and give an easy way to run it against any\ndatabase schema, it's only question of time..\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Stefan Kaltenbrunner <[email protected]> wrote:\n> Dimitri wrote:\n>> Folks, before you start to think \"what a dumb guy doing a dumb thing\" :-))\n>> I'll explain you few details:\n>>\n>> it's for more than 10 years I'm using a db_STRESS kit\n>> (http://dimitrik.free.fr/db_STRESS.html) to check databases\n>> performance and scalability. Until now I was very happy with results\n>> it gave me as it stress very well each database engine internals an\n>> put on light some things I should probably skip on other workloads.\n>> What do you want, with a time the \"fast\" query executed before in\n>> 500ms now runs within 1-2ms - not only hardware was improved but also\n>> database engines increased their performance a lot! :-))\n>\n> I was attempting to look into that \"benchmark\" kit a bit but I find the\n> information on that page a bit lacking :( a few notices:\n>\n> * is the sourcecode for the benchmark actually available? the \"kit\"\n> seems to contain a few precompiled binaries and some source/headfiles\n> but there are no building instructions, no makefile or even a README\n> which makes it really hard to verify exactly what the benchmark is doing\n> or if the benchmark client might actually be the problem here.\n>\n> * there is very little information on how the toolkit talks to the\n> database - some of the binaries seem to contain a static copy of libpq\n> or such?\n>\n> * how many queries per session is the toolkit actually using - some\n> earlier comments seem to imply you are doing a connect/disconnect cycle\n> for every query ist that actually true?\n>\n>\n> Stefan\n>\n", "msg_date": "Tue, 12 May 2009 15:25:38 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, May 12, 2009 at 8:59 AM, Dimitri <[email protected]> wrote:\n> Wait wait, currently I'm playing the \"stress scenario\", so there are\n> only 256 sessions max, but thing time is zero (full stress). Scenario\n> with 1600 users is to test how database is solid just to keep a huge\n> amount of users, but doing only one transaction per second (very low\n> global TPS comparing to what database is able to do, but it's testing\n> how well its internals working to manage the user sessions).\n\nDidn't we beat this to death in mid-March on this very same list?\nLast time I think it was Jignesh Shah. AIUI, it's a well-known fact\nthat PostgreSQL doesn't do very well at this kind of workload unless\nyou use a connection pooler.\n\n*goes and checks the archives* Sure enough, 116 emails under the\nsubject line \"Proposal of tunable fix for scalability of 8.4\".\n\nSo, if your goal is to find a scenario under which PostgreSQL performs\nas badly as possible, congratulations - you've discovered the same\ncase that we already knew about. Obviously it would be nice to\nimprove it, but IIRC so far no one has had any very good ideas on how\nto do that. 
If this example mimics a real-world workload that you\ncare about, and if using a connection pooler is just not a realistic\noption in that scenario for whatever reason, then you'd be better off\nworking on how to fix it than on measuring it, because it seems to me\nwe already know it's got problems, per previous discussions.\n\n...Robert\n", "msg_date": "Tue, 12 May 2009 10:21:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri wrote:\n> Hi Stefan,\n> \n> sorry, I did not have a time to bring all details into the toolkit -\n> but at least I published it instead to tell a \"nice story\" about :-)\n\nfair point and appreciated. But it seems important that benchmarking \nresults can be verified by others as well...\n\n> \n> The client process is a binary compiled with libpq. Client is\n> interpreting a scenario script and publish via SHM a time spent on\n> each SQL request. I did not publish sources yet as it'll also require\n> to explain how to compile them :-)) So for the moment it's shipped as\n> a freeware, but with time everything will be available (BTW, you're\n> the first who asking for sources (well, except IBM guys who asked to\n> get it on POWER boxes, but it's another story :-))\n\nwell there is no licence tag(or a copyright notice) or anything als \nassociated with the download which makes it a bit harder than it really \nneeds to be.\nThe reason why I was actually looking for the source is that all my \navailable benchmark platforms are none of the ones you are providing \nbinaries for which kinda reduces its usefulness.\n\n> \n> What is good is each client is publishing *live* its internal stats an\n> we're able to get live data and follow any kind of \"waves\" in\n> performance. Each session is a single process, so there is no\n> contention between clients as you may see on some other tools. The\n> current scenario script contains 2 selects (representing a Read\n> transaction) and delete/insert/update (representing Write\n> transaction). According a start parameters each client executing a\n> given number Reads per Write. It's connecting on the beginning and\n> disconnecting at the end of the test.\n\nwell I have seen clients getting bottlenecked internally (like wasting \nmore time in getting rid/absorbing of the actual result than it took the \nserver to generate the answer...).\nHow sure are you that your \"live publishing of data\" does not affect the \nbenchmark results(because it kinda generates an artifical think time) \nfor example?\nBut what I get from your answer is that you are basically doing one \nconnect/disconnect per client and the testcase you are talking about has \n256 clients?\n\n\nStefan\n", "msg_date": "Tue, 12 May 2009 16:34:30 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, 12 May 2009, Stefan Kaltenbrunner wrote:\n> But what I get from your answer is that you are basically doing one \n> connect/disconnect per client and the testcase you are talking about has 256 \n> clients?\n\nCorrect me if I'm wrong, but won't connect operations be all handled by a \nsingle thread - the parent postmaster? There's your scalability problem \nright there. 
Also, spawning a new backend process is an awful lot of \noverhead to run just one query.\n\nAs far as I can see, it's quite understandable for MySQL to perform better \nthan PostgreSQL in these circumstances, as it has a smaller simpler \nbackend to start up each time. If you really want to get a decent \nperformance out of Postgres, then use long-lived connections (which most \nreal-world use cases will do) and prepare your queries in advance with \nparameters.\n\nMatthew\n\n-- \n import oz.wizards.Magic;\n if (Magic.guessRight())... -- Computer Science Lecturer\n", "msg_date": "Tue, 12 May 2009 16:00:00 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-12 at 16:00 +0100, Matthew Wakeling wrote:\n> won't connect operations be all handled by a \n> single thread - the parent postmaster?\n\nNo, we spawn then authenticate. \n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 12 May 2009 16:04:48 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, 12 May 2009, Simon Riggs wrote:\n>> won't connect operations be all handled by a\n>> single thread - the parent postmaster?\n>\n> No, we spawn then authenticate.\n\nBut you still have a single thread doing the accept() and spawn. At some \npoint (maybe not now, but in the future) this could become a bottleneck \ngiven very short-lived connections.\n\nMatthew\n\n-- \n -. .-. .-. .-. .-. .-. .-. .-. .-. .-. .-. .-. .-.\n ||X|||\\ /|||X|||\\ /|||X|||\\ /|||X|||\\ /|||X|||\\ /|||X|||\\ /|||\n |/ \\|||X|||/ \\|||X|||/ \\|||X|||/ \\|||X|||/ \\|||X|||/ \\|||X|||/\n ' `-' `-' `-' `-' `-' `-' `-' `-' `-' `-' `-' `-'\n", "msg_date": "Tue, 12 May 2009 16:05:37 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Matthew Wakeling wrote:\n> On Tue, 12 May 2009, Simon Riggs wrote:\n>>> won't connect operations be all handled by a\n>>> single thread - the parent postmaster?\n>>\n>> No, we spawn then authenticate.\n> \n> But you still have a single thread doing the accept() and spawn. At some \n> point (maybe not now, but in the future) this could become a bottleneck \n> given very short-lived connections.\n\nwell the main cost is backend startup and that one is extremely \nexpensive (compared to the cost of a simple query and also depending on \nthe OS). We have more overhead there than other databases (most notably \nMySQL) hence what prompted my question on how the benchmark was operating.\nFor any kind of workloads that contain frequent connection \nestablishments one wants to use a connection pooler like pgbouncer(as \nsaid elsewhere in the thread already).\n\n\nStefan\n", "msg_date": "Tue, 12 May 2009 17:14:31 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Tue, 12 May 2009, Simon Riggs wrote:\n>> No, we spawn then authenticate.\n\n> But you still have a single thread doing the accept() and spawn. 
At some \n> point (maybe not now, but in the future) this could become a bottleneck \n> given very short-lived connections.\n\nMore to the point, each backend process is a pretty heavyweight object:\nit is a process, not a thread, and it's not going to be good for much\nuntil it's built up a reasonable amount of stuff in its private caches.\nI don't think the small number of cycles executed in the postmaster\nprocess amount to anything at all compared to the other overhead\ninvolved in getting a backend going.\n\nIn short: executing a single query per connection is going to suck,\nand there is not anything we are going to do about it except to tell\nyou to use a connection pooler.\n\nMySQL has a different architecture: thread per connection, and AFAIK\nwhatever caches it has are shared across threads. So a connection is a\nlighter-weight object for them; but there's no free lunch. They pay for\nit in having to tolerate locking/contention overhead on operations that\nfor us are backend-local.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 May 2009 11:18:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.. " }, { "msg_contents": "Robert, what I'm testing now is 256 users max. The workload is growing\nprogressively from 1, 2, 4, 8 ... to 256 users. Of course the Max\nthroughput is reached on the number of users equal to 2 * number of\ncores, but what's important for me here - database should continue to\nkeep the workload! - response time regressing, but the troughput\nshould remain near the same.\n\nSo, do I really need a pooler to keep 256 users working?? - I don't\nthink so, but please, correct me.\n\nBTW, I did not look to put PostgreSQL in bad conditions - the test is\nthe test, and as I said 2 years ago PostgreSQL outperformed MySQL on\nthe same test case, and there was nothing done within MySQL code to\nimprove it explicitly for db_STRESS.. And I'm staying pretty honest\nwhen I'm testing something.\n\nRgds,\n-Dimitri\n\n\nOn 5/12/09, Robert Haas <[email protected]> wrote:\n> On Tue, May 12, 2009 at 8:59 AM, Dimitri <[email protected]> wrote:\n>> Wait wait, currently I'm playing the \"stress scenario\", so there are\n>> only 256 sessions max, but thing time is zero (full stress). Scenario\n>> with 1600 users is to test how database is solid just to keep a huge\n>> amount of users, but doing only one transaction per second (very low\n>> global TPS comparing to what database is able to do, but it's testing\n>> how well its internals working to manage the user sessions).\n>\n> Didn't we beat this to death in mid-March on this very same list?\n> Last time I think it was Jignesh Shah. AIUI, it's a well-known fact\n> that PostgreSQL doesn't do very well at this kind of workload unless\n> you use a connection pooler.\n>\n> *goes and checks the archives* Sure enough, 116 emails under the\n> subject line \"Proposal of tunable fix for scalability of 8.4\".\n>\n> So, if your goal is to find a scenario under which PostgreSQL performs\n> as badly as possible, congratulations - you've discovered the same\n> case that we already knew about. Obviously it would be nice to\n> improve it, but IIRC so far no one has had any very good ideas on how\n> to do that. 
If this example mimics a real-world workload that you\n> care about, and if using a connection pooler is just not a realistic\n> option in that scenario for whatever reason, then you'd be better off\n> working on how to fix it than on measuring it, because it seems to me\n> we already know it's got problems, per previous discussions.\n>\n> ...Robert\n>\n", "msg_date": "Tue, 12 May 2009 17:22:31 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On 5/12/09, Stefan Kaltenbrunner <[email protected]> wrote:\n> Dimitri wrote:\n>> Hi Stefan,\n>>\n>> sorry, I did not have a time to bring all details into the toolkit -\n>> but at least I published it instead to tell a \"nice story\" about :-)\n>\n> fair point and appreciated. But it seems important that benchmarking\n> results can be verified by others as well...\n\nuntil now there were only people running Solaris or Linux :-))\n\n>\n>>\n>> The client process is a binary compiled with libpq. Client is\n>> interpreting a scenario script and publish via SHM a time spent on\n>> each SQL request. I did not publish sources yet as it'll also require\n>> to explain how to compile them :-)) So for the moment it's shipped as\n>> a freeware, but with time everything will be available (BTW, you're\n>> the first who asking for sources (well, except IBM guys who asked to\n>> get it on POWER boxes, but it's another story :-))\n>\n> well there is no licence tag(or a copyright notice) or anything als\n> associated with the download which makes it a bit harder than it really\n> needs to be.\n> The reason why I was actually looking for the source is that all my\n> available benchmark platforms are none of the ones you are providing\n> binaries for which kinda reduces its usefulness.\n>\n\nagree, will improve this point\n\n>>\n>> What is good is each client is publishing *live* its internal stats an\n>> we're able to get live data and follow any kind of \"waves\" in\n>> performance. Each session is a single process, so there is no\n>> contention between clients as you may see on some other tools. The\n>> current scenario script contains 2 selects (representing a Read\n>> transaction) and delete/insert/update (representing Write\n>> transaction). According a start parameters each client executing a\n>> given number Reads per Write. It's connecting on the beginning and\n>> disconnecting at the end of the test.\n>\n> well I have seen clients getting bottlenecked internally (like wasting\n> more time in getting rid/absorbing of the actual result than it took the\n> server to generate the answer...).\n> How sure are you that your \"live publishing of data\" does not affect the\n> benchmark results(because it kinda generates an artifical think time)\n> for example?\n\nOn all my test tools client are publishing their data via shared\nmemory segment (ISM), all they do is just *incrementing* their current\nstats values and continuing their processing. Another dedicated\nprogram should be executed to print these stats - it's connecting to\nthe same SHM segment and printing a *difference* between values for\nthe current and the next interval. Let me know if you need more\ndetails.\n\n> But what I get from your answer is that you are basically doing one\n> connect/disconnect per client and the testcase you are talking about has\n> 256 clients?\n\nExactly, only one connect/disconnect per test, and number of clients\nis growing progressively from 1, 2, 4, 8, 16, .. 
to 256\n\nRgds,\n-Dimitri\n\n>\n>\n> Stefan\n>\n", "msg_date": "Tue, 12 May 2009 17:37:45 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri <[email protected]> wrote:\n \n> Of course the Max throughput is reached on the number of users equal\n> to 2 * number of cores\n \nI'd expect that when disk I/O is not a significant limiting factor,\nbut I've seen a \"sweet spot\" of (2 * cores) + (effective spindle\ncount) for loads involving a lot of random I/O.\n \n> So, do I really need a pooler to keep 256 users working??\n \nI have seen throughput fall above a certain point when I don't use a\nconnection pooler. With a connection pooler which queues requests\nwhen all connections are busy, you will see no throughput degradation\nas users of the pool are added. Our connection pool is in our\nframework, so I don't know whether pgbouncer queues requests. \n(Perhaps someone else can comment on that, and make another suggestion\nif it doesn't.)\n \n-Kevin\n", "msg_date": "Tue, 12 May 2009 10:46:37 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, 2009-05-12 at 17:22 +0200, Dimitri wrote:\n> Robert, what I'm testing now is 256 users max. The workload is growing\n> progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max\n> throughput is reached on the number of users equal to 2 * number of\n> cores, but what's important for me here - database should continue to\n> keep the workload! - response time regressing, but the troughput\n> should remain near the same.\n> \n> So, do I really need a pooler to keep 256 users working?? - I don't\n> think so, but please, correct me.\n\nIf they disconnect and reconnect yes. If they keep the connections live\nthen no. \n\nJoshua D. Drake\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Tue, 12 May 2009 09:07:59 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "No, they keep connections till the end of the test.\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Joshua D. Drake <[email protected]> wrote:\n> On Tue, 2009-05-12 at 17:22 +0200, Dimitri wrote:\n>> Robert, what I'm testing now is 256 users max. The workload is growing\n>> progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max\n>> throughput is reached on the number of users equal to 2 * number of\n>> cores, but what's important for me here - database should continue to\n>> keep the workload! - response time regressing, but the troughput\n>> should remain near the same.\n>>\n>> So, do I really need a pooler to keep 256 users working?? - I don't\n>> think so, but please, correct me.\n>\n> If they disconnect and reconnect yes. If they keep the connections live\n> then no.\n>\n> Joshua D. Drake\n>\n> --\n> PostgreSQL - XMPP: [email protected]\n> Consulting, Development, Support, Training\n> 503-667-4564 - http://www.commandprompt.com/\n> The PostgreSQL Company, serving since 1997\n>\n>\n", "msg_date": "Tue, 12 May 2009 18:16:59 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." 
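Since an external pooler keeps coming up in this thread, a minimal pgbouncer setup in transaction-pooling mode would look roughly like the fragment below; the database name, paths and pool sizes are placeholders for illustration, not values taken from the benchmark:

    ; pgbouncer.ini -- minimal sketch, adjust names/paths/sizes for the real setup
    [databases]
    stress = host=127.0.0.1 port=5432 dbname=stress

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction            ; many client connections share few backends
    max_client_conn = 2000
    default_pool_size = 64             ; around 2 x cores, per the sweet spot mentioned above
    server_reset_query = DISCARD ALL   ; clean server-side session state between clients

Clients then connect to port 6432 instead of 5432, and the number of real PostgreSQL backends stays near default_pool_size no matter how many client connections are opened.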
}, { "msg_contents": "On Tue, May 12, 2009 at 11:22 AM, Dimitri <[email protected]> wrote:\n> Robert, what I'm testing now is 256 users max. The workload is growing\n> progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max\n> throughput is reached on the number of users equal to 2 * number of\n> cores, but what's important for me here - database should continue to\n> keep the workload! - response time regressing, but the troughput\n> should remain near the same.\n>\n> So, do I really need a pooler to keep 256 users working??  - I don't\n> think so, but please, correct me.\n\nNot an expert on this, but there has been a lot of discussion of the\nimportance of connection pooling in this space. Is MySQL still faster\nif you lower max_connections to a value that is closer to the number\nof users, like 400 rather than 2000?\n\n> BTW, I did not look to put PostgreSQL in bad conditions - the test is\n> the test, and as I said 2 years ago PostgreSQL outperformed MySQL on\n> the same test case, and there was nothing done within MySQL code to\n> improve it explicitly for db_STRESS.. And I'm staying pretty honest\n> when I'm testing something.\n\nYeah but it's not really clear what that something is. I believe you\nsaid upthread that PG 8.4 beta 1 is faster than PG 8.3.7, but PG 8.4\nbeta 1 is slower than MySQL 5.4 whereas PG 8.3.7 was faster than some\nolder version of MySQL. So PG got faster and MySQL got faster, but\nthey sped things up more than we did. If our performance were getting\nWORSE, I'd be worried about that, but the fact that they were able to\nmake more improvement on this particular case than we were doesn't\nexcite me very much. Sure, I'd love it if PG were even faster than it\nis, and if you have a suggested patch please send it in... or if you\nwant to profile it and send the results that would be great too. But\nI guess my point is that the case of a very large number of\nsimultaneous users with pauses-for-thought between queries has already\nbeen looked at in the very recent past in a way that's very similar to\nwhat you are doing (and by someone who works at the same company you\ndo, no less!) so I'm not quite sure why we're rehashing the issue.\n\n...Robert\n", "msg_date": "Tue, 12 May 2009 12:26:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, May 12, 2009 at 11:18 AM, Tom Lane <[email protected]> wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> On Tue, 12 May 2009, Simon Riggs wrote:\n>>> No, we spawn then authenticate.\n>\n>> But you still have a single thread doing the accept() and spawn. At some\n>> point (maybe not now, but in the future) this could become a bottleneck\n>> given very short-lived connections.\n>\n> More to the point, each backend process is a pretty heavyweight object:\n> it is a process, not a thread, and it's not going to be good for much\n> until it's built up a reasonable amount of stuff in its private caches.\n> I don't think the small number of cycles executed in the postmaster\n> process amount to anything at all compared to the other overhead\n> involved in getting a backend going.\n\nAIUI, whenever the connection pooler switches to serving a new client,\nit tells the PG backend to DISCARD ALL. But why couldn't we just\nimplement this same logic internally? 
IOW, when a client disconnects,\ninstead of having the backend exit immediately, have it perform the\nequivalent of DISCARD ALL and then stick around for a minute or two\nand, if a new connection request arrives within that time, have the\nold backend handle the new connection...\n\n(There is the problem of how to get the file descriptor returned by\nthe accept() call in the parent process down to the child... but I\nthink that at least on some UNIXen there is a way to pass an fd\nthrough a socket, or even dup it into another process by opening it\nfrom /proc/fd)\n\n...Robert\n", "msg_date": "Tue, 12 May 2009 12:32:05 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> AIUI, whenever the connection pooler switches to serving a new client,\n> it tells the PG backend to DISCARD ALL. But why couldn't we just\n> implement this same logic internally? IOW, when a client disconnects,\n> instead of having the backend exit immediately, have it perform the\n> equivalent of DISCARD ALL and then stick around for a minute or two\n> and, if a new connection request arrives within that time, have the\n> old backend handle the new connection...\n\nSee previous discussions. IIRC, there are two killer points:\n\n1. There is no (portable) way to pass the connection from the postmaster\nto another pre-existing process.\n\n2. You'd have to track which database, and probably which user, each\nsuch backend had been launched for; reconnecting a backend to a new\ndatabase is probably impractical and would certainly invalidate all\nthe caching.\n\nOverall it looked like way too much effort for way too little gain.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 May 2009 12:49:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.. " }, { "msg_contents": "On MySQL there is no changes if I set the number of sessions in the\nconfig file to 400 or to 2000 - for 2000 it'll just allocate more\nmemory.\n\nAfter latest fix with default_statistics_target=5, version 8.3.7 is\nrunning as fast as 8.4, even 8.4 is little little bit slower.\n\nI understand your position with a pooler, but I also want you think\nabout idea that 128 cores system will become a commodity server very\nsoon, and to use these cores on their full power you'll need a\ndatabase engine capable to run 256 users without pooler, because a\npooler will not help you here anymore..\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Robert Haas <[email protected]> wrote:\n> On Tue, May 12, 2009 at 11:22 AM, Dimitri <[email protected]> wrote:\n>> Robert, what I'm testing now is 256 users max. The workload is growing\n>> progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max\n>> throughput is reached on the number of users equal to 2 * number of\n>> cores, but what's important for me here - database should continue to\n>> keep the workload! - response time regressing, but the troughput\n>> should remain near the same.\n>>\n>> So, do I really need a pooler to keep 256 users working?? - I don't\n>> think so, but please, correct me.\n>\n> Not an expert on this, but there has been a lot of discussion of the\n> importance of connection pooling in this space. 
Is MySQL still faster\n> if you lower max_connections to a value that is closer to the number\n> of users, like 400 rather than 2000?\n>\n>> BTW, I did not look to put PostgreSQL in bad conditions - the test is\n>> the test, and as I said 2 years ago PostgreSQL outperformed MySQL on\n>> the same test case, and there was nothing done within MySQL code to\n>> improve it explicitly for db_STRESS.. And I'm staying pretty honest\n>> when I'm testing something.\n>\n> Yeah but it's not really clear what that something is. I believe you\n> said upthread that PG 8.4 beta 1 is faster than PG 8.3.7, but PG 8.4\n> beta 1 is slower than MySQL 5.4 whereas PG 8.3.7 was faster than some\n> older version of MySQL. So PG got faster and MySQL got faster, but\n> they sped things up more than we did. If our performance were getting\n> WORSE, I'd be worried about that, but the fact that they were able to\n> make more improvement on this particular case than we were doesn't\n> excite me very much. Sure, I'd love it if PG were even faster than it\n> is, and if you have a suggested patch please send it in... or if you\n> want to profile it and send the results that would be great too. But\n> I guess my point is that the case of a very large number of\n> simultaneous users with pauses-for-thought between queries has already\n> been looked at in the very recent past in a way that's very similar to\n> what you are doing (and by someone who works at the same company you\n> do, no less!) so I'm not quite sure why we're rehashing the issue.\n>\n> ...Robert\n>\n", "msg_date": "Tue, 12 May 2009 19:00:26 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Andres Freund escribió:\n\n> Naturally it would still be nice to be good in this not optimal workload...\n\nI find it hard to justify wasting our scarce development resources into\noptimizing such a contrived workload.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 12 May 2009 13:53:24 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, May 12, 2009 at 1:00 PM, Dimitri <[email protected]> wrote:\n> On MySQL there is no changes if I set the number of sessions in the\n> config file to 400 or to 2000 - for 2000 it'll just allocate more\n> memory.\n\nI don't care whether the setting affects the speed of MySQL. I want\nto know if it affects the speed of PostgreSQL.\n\n> After latest fix with default_statistics_target=5, version 8.3.7 is\n> running as fast as 8.4, even 8.4 is little little bit slower.\n>\n> I understand your position with a pooler, but I also want you think\n> about idea that 128 cores system will become a commodity server very\n> soon, and to use these cores on their full power you'll need a\n> database engine capable to run 256 users without pooler, because a\n> pooler will not help you here anymore..\n\nSo what? People with 128-core systems will not be running trivial\njoins that return in 1-2ms and have one second think times between\nthem. And if they are, and if they have nothing better to do than\nworry about whether MySQL can process those queries in 1/2000th of the\nthink time rather than 1/1000th of the think time, then they can use\nMySQL. 
If we're going to worry about performance on 128-core system,\nwe would be much better advised to put our efforts into parallel query\nexecution than how many microseconds it takes to execute very simple\nqueries.\n\nStill, I have no problem with making PostgreSQL faster in the case\nyou're describing. I'm just not interested in doing it on my own time\nfor free. I am sure there are a number of people who read this list\nregularly who would be willing to do it for money, though. Maybe even\nme. :-)\n\n...Robert\n", "msg_date": "Tue, 12 May 2009 13:57:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Although nobody wants to support it, he should try the patch that Jignesh K.\nShah (from Sun) proposed that makes ProcArrayLock lighter-weight. If it\nmakes 32 cores much faster, then we have a smoking gun.\n\nAlthough everyone here is talking about this as an 'unoptimal' solution, the\nfact is there is no evidence that a connection pooler will fix the\nscalability from 16 > 32 cores.\nCertainly a connection pooler will help most results, but it may not fix the\nscalability problem.\n\nA question for Dimitri:\nWhat is the scalability from 16 > 32 cores at the 'peak' load that occurs\nnear 2x the CPU count? Is it also poor? If this is also poor, IMO the\ncommunity here should not be complaining about this unopimal case -- a\nconnection pooler at that stage does little and prepared statements will\nincrease throughput but not likely alter scalability.\n\nIf that result scales, then the short term answer is a connection pooler.\n\nIn the tests that Jingesh ran -- making the ProcArrayLock faster helped the\ncase where connections = 2x the CPU core count quite a bit.\n\nThe thread about the CPU scalability is \"Proposal of tunable fix for\nscalability of 8.4\", originally posted by \"Jignesh K. Shah\"\n<[email protected]>, March 11 2009.\n\nIt would be very useful to see results of this benchmark with:\n1. A Connection Pooler\n2. Jignesh's patch\n3. Prepared statements\n\n#3 is important, because prepared statements are ideal for queries that\nperform well with low statistics_targets, and not ideal for those that\nrequire high statistics targets. Realistically, an app won't have more than\na couple dozen statement forms to prepare. Setting the default statistics\ntarget to 5 is just a way to make some other query perform bad.\n\n\nOn 5/12/09 10:53 AM, \"Alvaro Herrera\" <[email protected]> wrote:\n\n> Andres Freund escribió:\n> \n>> Naturally it would still be nice to be good in this not optimal workload...\n> \n> I find it hard to justify wasting our scarce development resources into\n> optimizing such a contrived workload.\n> \n> --\n> Alvaro Herrera http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 12 May 2009 11:30:04 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." 
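To make the prepared-statement point concrete: the pattern suggested earlier in the thread (prepare once per backend, and check pg_prepared_statements first when running behind a transaction-pooling pgbouncer), plus the per-column alternative to lowering default_statistics_target globally, boils down to SQL along these lines. The statement name and column names are invented for illustration, since the real schema is not shown here:

    -- does this (possibly pooled) backend already hold the plan?
    SELECT name FROM pg_prepared_statements WHERE name = 'hist_read';

    -- only if the row above is missing: pay for parse + plan once...
    PREPARE hist_read (integer) AS
        SELECT * FROM history WHERE ref_id = $1;

    -- ...then every later call skips straight to execution
    EXECUTE hist_read(42);

    -- and if some other query needs better statistics, raise them per column
    -- instead of raising default_statistics_target for everything:
    ALTER TABLE history ALTER COLUMN ref_id SET STATISTICS 100;
    ANALYZE history;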
}, { "msg_contents": "\nOn Tue, 2009-05-12 at 11:30 -0700, Scott Carey wrote:\n> the fact is there is no evidence that a connection pooler will fix the\n> scalability from 16 > 32 cores.\n\nThere has been much analysis over a number of years of the effects of\nthe ProcArrayLock, specifically the O(N^2) effect of increasing numbers\nof connections on GetSnapshotData(). Most discussion has been on\n-hackers, not -perform.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 12 May 2009 19:46:57 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, May 12, 2009 at 5:49 PM, Tom Lane <[email protected]> wrote:\n> See previous discussions.  IIRC, there are two killer points:\n>\n> 1. There is no (portable) way to pass the connection from the postmaster\n> to another pre-existing process.\n\nThe Apache model is to have all the backends call accept. So incoming\nconnections don't get handled by a single master process, they get\nhandled by whichever process the kernel picks to receive the\nconnection.\n\n-- \ngreg\n", "msg_date": "Tue, 12 May 2009 20:15:52 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, May 12, 2009 at 12:49 PM, Tom Lane <[email protected]> wrote:\n> 1. There is no (portable) way to pass the connection from the postmaster\n> to another pre-existing process.\n\n[Googles.] It's not obvious to me that SCM_RIGHTS is non-portable,\nand Windows has an API call WSADuplicateSocket() specifically for this\npurpose.\n\n> 2. You'd have to track which database, and probably which user, each\n> such backend had been launched for; reconnecting a backend to a new\n> database is probably impractical and would certainly invalidate all\n> the caching.\n\nUser doesn't seem like a major problem, but I understand your point\nabout databases, which would presumably preclude the Apache approach\nof having every backend call accept() on the master socket.\n\n...Robert\n", "msg_date": "Tue, 12 May 2009 15:52:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-12 at 15:52 -0400, Robert Haas wrote:\n> On Tue, May 12, 2009 at 12:49 PM, Tom Lane <[email protected]> wrote:\n> > 1. There is no (portable) way to pass the connection from the postmaster\n> > to another pre-existing process.\n> \n> [Googles.] It's not obvious to me that SCM_RIGHTS is non-portable,\n> and Windows has an API call WSADuplicateSocket() specifically for this\n> purpose.\n\nRobert, Greg,\n\nTom's main point is it isn't worth doing. We have connection pooling\nsoftware that works well, very well. Why do we want to bring it into\ncore? (Think of the bugs we'd hit...) If we did, who would care?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 12 May 2009 21:24:41 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, May 12, 2009 at 4:24 PM, Simon Riggs <[email protected]> wrote:\n>\n> On Tue, 2009-05-12 at 15:52 -0400, Robert Haas wrote:\n>> On Tue, May 12, 2009 at 12:49 PM, Tom Lane <[email protected]> wrote:\n>> > 1. 
There is no (portable) way to pass the connection from the postmaster\n>> > to another pre-existing process.\n>>\n>> [Googles.]  It's not obvious to me that SCM_RIGHTS is non-portable,\n>> and Windows has an API call WSADuplicateSocket() specifically for this\n>> purpose.\n>\n> Robert, Greg,\n>\n> Tom's main point is it isn't worth doing. We have connection pooling\n> software that works well, very well. Why do we want to bring it into\n> core? (Think of the bugs we'd hit...) If we did, who would care?\n\nI don't know. It seems like it would be easier to manage just\nPostgreSQL than PostgreSQL + connection pooling software, but mostly I\nwas just curious whether it had been thought about, so I asked, and\nthe answer then led to a further question... was not intending to\nmake a big deal about it.\n\n...Robert\n", "msg_date": "Tue, 12 May 2009 16:37:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi,\n\nLe 12 mai 09 � 18:32, Robert Haas a �crit :\n> implement this same logic internally? IOW, when a client disconnects,\n> instead of having the backend exit immediately, have it perform the\n> equivalent of DISCARD ALL and then stick around for a minute or two\n> and, if a new connection request arrives within that time, have the\n> old backend handle the new connection...\n\nA much better idea to solve this, in my opinion, would be to have \npgbouncer as a postmaster child, integrated into PostgreSQL. It allows \nfor choosing whether you want session pooling, transaction pooling or \nstatement pooling, which is a more deterministic way to choose when \nyour client connection will benefit from a fresh backend or an \nexisting one. And it's respecting some backend timeouts etc.\nIt's Open-Source proven technology, and I think I've heard about some \nPostgreSQL distribution where it's already a postmaster's child.\n\n<handwaving>\nAnd when associated with Hot Standby (and Sync Wal Shipping), having a \nconnection pooler in -core could allow for transparent Read-Write \naccess to the slave: at the moment you need an XID (and when connected \non the slave), the backend could tell the pgbouncer process to \nredirect the connection to the master. With such a feature, you don't \nhave to build client side high availability, just connect to either \nthe master or the slave and be done with it, whatever the SQL you're \ngonna run.\n</>\n\n>\n\nRegards,\n-- \ndim", "msg_date": "Tue, 12 May 2009 23:21:52 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "I'm sorry, but I'm confused. Everyone keeps talking about connection\npooling, but Dimitri has said repeatedly that each client makes a\nsingle connection and then keeps it open until the end of the test,\nnot that it makes a single connection per SQL query. Connection\nstartup costs shouldn't be an issue. Am I missing something here?\ntest(N) starts N clients, each client creates a single connection and\nhammers the server for a while on that connection. test(N) is run for\nN=1,2,4,8...256. This seems like a very reasonable test scenario.\n\n--\nGlenn Maynard\n", "msg_date": "Tue, 12 May 2009 17:52:13 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." 
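For reference, the external pooler Dimitri Fontaine mentions needs very little setup to try. A minimal pgbouncer sketch with placeholder names, paths and sizes (transaction pooling; every value shown is illustrative, not a recommendation):

    ; pgbouncer.ini
    [databases]
    stress = host=127.0.0.1 port=5432 dbname=stress

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    default_pool_size = 32
    max_client_conn = 1000

Clients then connect to port 6432 instead of 5432; with pool_mode = transaction a few dozen backends can serve hundreds of client connections, which plays much the same role as the in-core DISCARD-ALL-and-reuse idea floated above.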
}, { "msg_contents": "On Tue, 2009-05-12 at 21:24 +0100, Simon Riggs wrote:\n> On Tue, 2009-05-12 at 15:52 -0400, Robert Haas wrote:\n> > On Tue, May 12, 2009 at 12:49 PM, Tom Lane <[email protected]> wrote:\n> > > 1. There is no (portable) way to pass the connection from the postmaster\n> > > to another pre-existing process.\n> > \n> > [Googles.] It's not obvious to me that SCM_RIGHTS is non-portable,\n> > and Windows has an API call WSADuplicateSocket() specifically for this\n> > purpose.\n> \n> Robert, Greg,\n> \n> Tom's main point is it isn't worth doing. We have connection pooling\n> software that works well, very well. Why do we want to bring it into\n> core? (Think of the bugs we'd hit...) If we did, who would care?\n\nI would.\n\nNot to knock poolers but they are limited and not very efficient. Heck\nthe best one I have used is pgbouncer and it has problems too under\nheavy load (due to libevent issues). It also doesn't support all of our\nauth methods.\n\nApache solved this problem back when it was still called NSCA HTTPD. Why\naren't we preforking again?\n\nJoshua D. Drake\n\n\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Tue, 12 May 2009 15:41:00 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri Fontaine escribi�:\n\n> A much better idea to solve this, in my opinion, would be to have \n> pgbouncer as a postmaster child, integrated into PostgreSQL. It allows \n> for choosing whether you want session pooling, transaction pooling or \n> statement pooling, which is a more deterministic way to choose when your \n> client connection will benefit from a fresh backend or an existing one. \n> And it's respecting some backend timeouts etc.\n\nHmm. Seems like the best idea if we go this route would be one of\nSimon's which was to have better support for pluggable postmaster\nchildren.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 12 May 2009 19:54:18 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "* Joshua D. Drake <[email protected]> [090512 19:27]:\n \n> Apache solved this problem back when it was still called NSCA HTTPD. Why\n> aren't we preforking again?\n\nOf course, preforking and connection pooling are totally different\nbeast...\n\nBut, what really does preforking give us? A 2 or 3% improvement? The\nforking isn't the expensive part, the per-database setup that happens is\nthe expensive setup... All pre-forking would save us is a tiny part of\nthe initial setup, and in turn make our robust postmaster controller no\nlonger have control.\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Tue, 12 May 2009 20:34:48 -0400", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, 2009-05-12 at 20:34 -0400, Aidan Van Dyk wrote:\n> * Joshua D. Drake <[email protected]> [090512 19:27]:\n> \n> > Apache solved this problem back when it was still called NSCA HTTPD. 
Why\n> > aren't we preforking again?\n> \n> Of course, preforking and connection pooling are totally different\n> beast...\n> \n\nYes and no. They both solve similar problems and preforking solves more\nproblems when you look at the picture in entirety (namely authentication\nintegration etc..)\n\n> But, what really does preforking give us? A 2 or 3% improvement?\n\nIt depends on the problem we are solving. We can test it but I would bet\nit is more than that especially in a high velocity environment.\n\n> The\n> forking isn't the expensive part,\n\nIt is expensive but not as expensive as the below.\n\n> the per-database setup that happens is\n> the expensive setup... All pre-forking would save us is a tiny part of\n> the initial setup, and in turn make our robust postmaster controller no\n> longer have control.\n\nI don't buy this. Properly coded we aren't going to lose any \"control\".\n\nJoshua D. Drake\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Tue, 12 May 2009 17:39:45 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "* Aidan Van Dyk ([email protected]) wrote:\n> But, what really does preforking give us? A 2 or 3% improvement? The\n> forking isn't the expensive part, the per-database setup that happens is\n> the expensive setup... \n\nObviously that begs the question- why not support pre-fork with specific\ndatabases associated with specific backends that do the per-database\nsetup prior to a connection coming in? eg- I want 5 backends ready per\nuser database (excludes template0, template1, postgres).\n\nThoughts?\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 12 May 2009 21:41:15 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On 5/12/09, Robert Haas <[email protected]> wrote:\n> On Tue, May 12, 2009 at 1:00 PM, Dimitri <[email protected]> wrote:\n>> On MySQL there is no changes if I set the number of sessions in the\n>> config file to 400 or to 2000 - for 2000 it'll just allocate more\n>> memory.\n>\n> I don't care whether the setting affects the speed of MySQL. I want\n> to know if it affects the speed of PostgreSQL.\n\nthe problem is they both have \"max_connections\" parameter, so as you\nasked for MySQL I answered for MySQL, did not test yet for PostgreSQL,\nwill be in the next series..\n\n>\n>> After latest fix with default_statistics_target=5, version 8.3.7 is\n>> running as fast as 8.4, even 8.4 is little little bit slower.\n>>\n>> I understand your position with a pooler, but I also want you think\n>> about idea that 128 cores system will become a commodity server very\n>> soon, and to use these cores on their full power you'll need a\n>> database engine capable to run 256 users without pooler, because a\n>> pooler will not help you here anymore..\n>\n> So what? People with 128-core systems will not be running trivial\n> joins that return in 1-2ms and have one second think times between\n> them. And if they are, and if they have nothing better to do than\n> worry about whether MySQL can process those queries in 1/2000th of the\n> think time rather than 1/1000th of the think time, then they can use\n> MySQL. 
If we're going to worry about performance on 128-core system,\n> we would be much better advised to put our efforts into parallel query\n> execution than how many microseconds it takes to execute very simple\n> queries.\n\nDo you really think nowdays for example a web forum application having\nPG as a backend will have queries running slower than 1-2ms to print a\nthread message within your browser??? or banking transactions??\n\n>\n> Still, I have no problem with making PostgreSQL faster in the case\n> you're describing. I'm just not interested in doing it on my own time\n> for free. I am sure there are a number of people who read this list\n> regularly who would be willing to do it for money, though. Maybe even\n> me. :-)\n>\n> ...Robert\n>\n\nYou don't need to believe me, but I'm doing it for free - I still have\nmy work to finish in parallel :-)) And on the same time I don't see\nany other way to learn and improve my knowledge, but nobody is perfect\n:-))\n\nRgds,\n-Dimitri\n", "msg_date": "Wed, 13 May 2009 12:02:51 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi Scott,\n\nOn 5/12/09, Scott Carey <[email protected]> wrote:\n> Although nobody wants to support it, he should try the patch that Jignesh K.\n> Shah (from Sun) proposed that makes ProcArrayLock lighter-weight. If it\n> makes 32 cores much faster, then we have a smoking gun.\n>\n> Although everyone here is talking about this as an 'unoptimal' solution, the\n> fact is there is no evidence that a connection pooler will fix the\n> scalability from 16 > 32 cores.\n> Certainly a connection pooler will help most results, but it may not fix the\n> scalability problem.\n>\n> A question for Dimitri:\n> What is the scalability from 16 > 32 cores at the 'peak' load that occurs\n> near 2x the CPU count? Is it also poor? If this is also poor, IMO the\n> community here should not be complaining about this unopimal case -- a\n> connection pooler at that stage does little and prepared statements will\n> increase throughput but not likely alter scalability.\n\nI'm attaching a small graph showing a TPS level on PG 8.4 depending on\nnumber of cores (X-axis is a number of concurrent users, Y-axis is the\nTPS number). As you may see TPS increase is near linear while moving\nfrom 8 to 16 cores, while on 32cores even it's growing slightly\ndifferently, what is unclear is why TPS level is staying limited to\n11.000 TPS on 32cores. And it's pure read-only workload.\n\n>\n> If that result scales, then the short term answer is a connection pooler.\n>\n> In the tests that Jingesh ran -- making the ProcArrayLock faster helped the\n> case where connections = 2x the CPU core count quite a bit.\n>\n> The thread about the CPU scalability is \"Proposal of tunable fix for\n> scalability of 8.4\", originally posted by \"Jignesh K. Shah\"\n> <[email protected]>, March 11 2009.\n>\n> It would be very useful to see results of this benchmark with:\n> 1. A Connection Pooler\n\nwill not help, as each client is *not* disconnecting/reconnecting\nduring the test, as well PG is keeping well even 256 users. And TPS\nlimit is reached already on 64 users, don't think pooler will help\nhere.\n\n> 2. Jignesh's patch\n\nI've already tested it and it did not help in my case because the real\nproblem is elsewhere.. (however, I did not test it yet with my latest\nconfig params)\n\n> 3. 
Prepared statements\n>\n\nyes, I'm preparing this test.\n\n> #3 is important, because prepared statements are ideal for queries that\n> perform well with low statistics_targets, and not ideal for those that\n> require high statistics targets. Realistically, an app won't have more than\n> a couple dozen statement forms to prepare. Setting the default statistics\n> target to 5 is just a way to make some other query perform bad.\n\nAgree, but as you may have a different statistic target *per* table it\nshould not be a problem. What is sure - all time spent on parse and\nplanner will be removed here, and the final time should be a pure\nexecution.\n\nRgds,\n-Dimitri\n\n>\n>\n> On 5/12/09 10:53 AM, \"Alvaro Herrera\" <[email protected]> wrote:\n>\n>> Andres Freund escribió:\n>>\n>>> Naturally it would still be nice to be good in this not optimal\n>>> workload...\n>>\n>> I find it hard to justify wasting our scarce development resources into\n>> optimizing such a contrived workload.\n>>\n>> --\n>> Alvaro Herrera\n>> http://www.CommandPrompt.com/\n>> The PostgreSQL Company - Command Prompt, Inc.\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>", "msg_date": "Wed, 13 May 2009 12:22:06 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "I'm also confused, but seems discussion giving also other ideas :-)\nBut yes, each client is connecting to the database server only *once*.\n\nTo presice how the test is running:\n - 1 client is started => 1 in total\n - sleep ...\n - 1 another client is started => 2 in total\n - sleep ..\n - 2 another clients are started => 4 in total\n - sleep ..\n ...\n ... =======> 256 in total\n - sleep ...\n - kill clients\n\nSo I even able to monitor how each new client impact all others. The\ntest kit is quite flexible to prepare any kind of stress situations.\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Glenn Maynard <[email protected]> wrote:\n> I'm sorry, but I'm confused. Everyone keeps talking about connection\n> pooling, but Dimitri has said repeatedly that each client makes a\n> single connection and then keeps it open until the end of the test,\n> not that it makes a single connection per SQL query. Connection\n> startup costs shouldn't be an issue. Am I missing something here?\n> test(N) starts N clients, each client creates a single connection and\n> hammers the server for a while on that connection. test(N) is run for\n> N=1,2,4,8...256. This seems like a very reasonable test scenario.\n>\n> --\n> Glenn Maynard\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 13 May 2009 12:29:25 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Glenn Maynard <[email protected]> wrote: \n> I'm sorry, but I'm confused. Everyone keeps talking about\n> connection pooling, but Dimitri has said repeatedly that each client\n> makes a single connection and then keeps it open until the end of\n> the test, not that it makes a single connection per SQL query. \n> Connection startup costs shouldn't be an issue. 
Am I missing\n> something here?\n \nQuite aside from the overhead of spawning new processes, if you have\nmore active connections than you have resources for them to go after,\nyou just increase context switching and resource contention, both of\nwhich have some cost, without any offsetting gains. That would tend\nto explain why performance tapers off after a certain point. A\nconnection pool which queues requests prevents this degradation.\n \nIt would be interesting, with each of the CPU counts, to profile\nPostgreSQL at the peak of each curve to see where the time goes when\nit is operating with an optimal poolsize. Tapering after that point\nis rather uninteresting, and profiles would be less useful beyond that\npoint, as the noise from the context switching and resource contention\nwould make it harder to spot issues that really matter..\n \n-Kevin\n", "msg_date": "Wed, 13 May 2009 09:32:23 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "The idea is good, but *only* pooling will be not enough. I mean if all\nwhat pooler is doing is only keeping no more than N backends working -\nit'll be not enough. You never know what exactly your query will do -\nif you choose your N value to be sure to not overload CPU and then\nsome of your queries start to read from disk - you waste your idle CPU\ntime because it was still possible to run other queries requiring CPU\ntime rather I/O, etc...\n\nI wrote some ideas about an \"ideal\" solution here (just omit the word\n\"mysql\" - as it's a theory it's valable for any db engine):\nhttp://dimitrik.free.fr/db_STRESS_MySQL_540_and_others_Apr2009.html#note_5442\n\nRgds,\n-Dimitri\n\nOn 5/13/09, Kevin Grittner <[email protected]> wrote:\n> Glenn Maynard <[email protected]> wrote:\n>> I'm sorry, but I'm confused. Everyone keeps talking about\n>> connection pooling, but Dimitri has said repeatedly that each client\n>> makes a single connection and then keeps it open until the end of\n>> the test, not that it makes a single connection per SQL query.\n>> Connection startup costs shouldn't be an issue. Am I missing\n>> something here?\n>\n> Quite aside from the overhead of spawning new processes, if you have\n> more active connections than you have resources for them to go after,\n> you just increase context switching and resource contention, both of\n> which have some cost, without any offsetting gains. That would tend\n> to explain why performance tapers off after a certain point. A\n> connection pool which queues requests prevents this degradation.\n>\n> It would be interesting, with each of the CPU counts, to profile\n> PostgreSQL at the peak of each curve to see where the time goes when\n> it is operating with an optimal poolsize. Tapering after that point\n> is rather uninteresting, and profiles would be less useful beyond that\n> point, as the noise from the context switching and resource contention\n> would make it harder to spot issues that really matter..\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 13 May 2009 18:16:31 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." 
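One way to check whether the extra sessions are actually runnable or just piling up behind locks is to snapshot pg_stat_activity while the test is at its plateau. A rough sketch against the 8.3/8.4 catalog columns (later releases renamed them):

    SELECT waiting,
           current_query = '<IDLE>' AS idle,
           count(*)
      FROM pg_stat_activity
     GROUP BY waiting, idle
     ORDER BY count(*) DESC;

If most backends show waiting = true at the throughput ceiling, the limit is lock contention rather than a lack of CPU; if most of them are simply idle, a smaller pool gives up nothing.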
}, { "msg_contents": "\nOn 5/13/09 3:22 AM, \"Dimitri\" <[email protected]> wrote:\n\n> Hi Scott,\n> \n> On 5/12/09, Scott Carey <[email protected]> wrote:\n>> Although nobody wants to support it, he should try the patch that Jignesh K.\n>> Shah (from Sun) proposed that makes ProcArrayLock lighter-weight. If it\n>> makes 32 cores much faster, then we have a smoking gun.\n>> \n>> Although everyone here is talking about this as an 'unoptimal' solution, the\n>> fact is there is no evidence that a connection pooler will fix the\n>> scalability from 16 > 32 cores.\n>> Certainly a connection pooler will help most results, but it may not fix the\n>> scalability problem.\n>> \n>> A question for Dimitri:\n>> What is the scalability from 16 > 32 cores at the 'peak' load that occurs\n>> near 2x the CPU count? Is it also poor? If this is also poor, IMO the\n>> community here should not be complaining about this unopimal case -- a\n>> connection pooler at that stage does little and prepared statements will\n>> increase throughput but not likely alter scalability.\n> \n> I'm attaching a small graph showing a TPS level on PG 8.4 depending on\n> number of cores (X-axis is a number of concurrent users, Y-axis is the\n> TPS number). As you may see TPS increase is near linear while moving\n> from 8 to 16 cores, while on 32cores even it's growing slightly\n> differently, what is unclear is why TPS level is staying limited to\n> 11.000 TPS on 32cores. And it's pure read-only workload.\n> \n\nInteresting. What hardware is this, btw? Looks like the 32 core system\nprobably has 2x the CPU and a bit less interconnect efficiency versus the 16\ncore one (which would be typical).\nIs the 16 core case the same, but with fewer cores per processor active? Or\nfewer processors total?\nUnderstanding the scaling difference may require a better understanding of\nthe other differences besides core count.\n\n>> \n>> If that result scales, then the short term answer is a connection pooler.\n>> \n>> In the tests that Jingesh ran -- making the ProcArrayLock faster helped the\n>> case where connections = 2x the CPU core count quite a bit.\n>> \n>> The thread about the CPU scalability is \"Proposal of tunable fix for\n>> scalability of 8.4\", originally posted by \"Jignesh K. Shah\"\n>> <[email protected]>, March 11 2009.\n>> \n>> It would be very useful to see results of this benchmark with:\n>> 1. A Connection Pooler\n> \n> will not help, as each client is *not* disconnecting/reconnecting\n> during the test, as well PG is keeping well even 256 users. And TPS\n> limit is reached already on 64 users, don't think pooler will help\n> here.\n> \n\nActually, it might help a little. Postgres has a flaw that makes backends\nblock on a lock briefly based on the number of total backends -- active or\ncompletely passive. Your tool has some (very small) user-side delay and a\nconnection pooler would probably allow 64 of your users to efficiently 'fit'\nin 48 or so connection pooler slots.\n\nIt is not about connecting and disconnecting in this case, its about\nminimizing Postgres' process count. If this does help, it would hint at\ncertain bottlenecks. If it doesn't it would point elsewhere (and quiet some\ncritics).\n\nHowever, its unrealistic for any process-per-connection system to have less\nbackends than about 2x the core count -- else any waiting on I/O or network\nwill just starve CPU. So this would just be done for research, not a real\nanswer to making it scale better.\n\nFor those who say \"but, what if its I/O bound! 
You don't need more\nbackends then!\": Well you don't need more CPU either if you're I/O bound.\nBy definition, CPU scaling tests imply the I/O can keep up.\n\n\n>> 2. Jignesh's patch\n> \n> I've already tested it and it did not help in my case because the real\n> problem is elsewhere.. (however, I did not test it yet with my latest\n> config params)\n> \n\nGreat to hear that! -- That means this case is probably not ProcArrayLock.\nIf its Solaris, could we get:\n1. What is the CPU stats when it is in the inefficient state near 64 or 128\nconcurrent users (vmstat, etc. I'm interested in CPU in\nuser/system/idle/wait time, and context switches/sec mostly).\n2. A Dtrace probe on the postgres locks -- we might be able to identify\nsomething here.\n\nThe results here would be useful -- if its an expected condition in the\nplanner or parser, it would be useful confirmation. If its something\nunexpected and easy to fix -- it might be changed relatively soon.\n\nIf its not easy to detect, it could be many other things -- but the process\nabove at least rules some things out and better characterizes the state.\n\n>> 3. Prepared statements\n>> \n> \n> yes, I'm preparing this test.\n> \n>> #3 is important, because prepared statements are ideal for queries that\n>> perform well with low statistics_targets, and not ideal for those that\n>> require high statistics targets. Realistically, an app won't have more than\n>> a couple dozen statement forms to prepare. Setting the default statistics\n>> target to 5 is just a way to make some other query perform bad.\n> \n> Agree, but as you may have a different statistic target *per* table it\n> should not be a problem. What is sure - all time spent on parse and\n> planner will be removed here, and the final time should be a pure\n> execution.\n> \n\nI'm definitely interested here because although pure execution will\ncertainly be faster, it may not scale any better.\n\n\n> Rgds,\n> -Dimitri\n> \n\n", "msg_date": "Wed, 13 May 2009 09:42:42 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Dimitri <[email protected]> wrote: \n> The idea is good, but *only* pooling will be not enough. I mean if\n> all what pooler is doing is only keeping no more than N backends\n> working - it'll be not enough. You never know what exactly your\n> query will do - if you choose your N value to be sure to not\n> overload CPU and then some of your queries start to read from disk -\n> you waste your idle CPU time because it was still possible to run\n> other queries requiring CPU time rather I/O, etc...\n \nI never meant to imply that CPUs were the only resources which\nmattered. Network and disk I/O certainly come into play. I would\nthink that various locks might count. You have to benchmark your\nactual workload to find the sweet spot for your load on your hardware.\n I've usually found it to be around (2 * cpu count) + (effective\nspindle count), where effective spindle count id determined not only\nby your RAID also your access pattern. 
(If everything is fully\ncached, and you have no write delays because of a BBU RAID controller\nwith write-back, effective spindle count is zero.)\n \nSince the curve generally falls off more slowly past the sweet spot\nthan it climbs to get there, I tend to go a little above the apparent\nsweet spot to protect against bad performance in a different load mix\nthan my tests.\n \n> I wrote some ideas about an \"ideal\" solution here (just omit the\n> word \"mysql\" - as it's a theory it's valable for any db engine):\n>\nhttp://dimitrik.free.fr/db_STRESS_MySQL_540_and_others_Apr2009.html#note_5442\n \nI've seen similar techniques used in other databases, and I'm far from\nconvinced that it's ideal or optimal.\n \n-Kevin\n", "msg_date": "Wed, 13 May 2009 12:06:22 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi,\n\nLe 13 mai 09 � 18:42, Scott Carey a �crit :\n>> will not help, as each client is *not* disconnecting/reconnecting\n>> during the test, as well PG is keeping well even 256 users. And TPS\n>> limit is reached already on 64 users, don't think pooler will help\n>> here.\n>\n> Actually, it might help a little. Postgres has a flaw that makes \n> backends\n> block on a lock briefly based on the number of total backends -- \n> active or\n> completely passive. Your tool has some (very small) user-side delay \n> and a\n> connection pooler would probably allow 64 of your users to \n> efficiently 'fit'\n> in 48 or so connection pooler slots.\n\nIt seems you have think time, and I'm only insisting on what Scott \nsaid, but having thinktime means a connection pool can help. Pgbouncer \nis a good choice because it won't even attempt to parse the queries, \nand it has a flexible configuration.\n\n>>> 3. Prepared statements\n>> yes, I'm preparing this test.\n\nIt's possible to use prepared statement and benefit from pgbouncer at \nthe same time, but up until now it requires the application to test \nwhether its statements are already prepared at connection time, \nbecause the application is not controlling when pgbouncer is reusing \nan existing backend or giving it a fresh one.\n\nAs I think I need this solution too, I've coded a PG module to scratch \nthat itch this morning, and just published it (BSD licenced) on \npgfoundry:\n http://preprepare.projects.postgresql.org/README.html\n http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/preprepare/preprepare/\n\nWith this module and the proper pgbouncer setup (connect_query='SELECT \nprepare_all();') the application has no more to special case the fresh- \nbackend-nothing-prepared case, it's all transparent, just replace your \nSELECT query with its EXECUTE foo(x, y, z) counter part.\n\nI've took the approach to setup the prepared statements themselves \ninto a table with columns name and statement, this latter one \ncontaining the full PREPARE SQL command. There's a custom variable \npreprepare.relation that has to be your table name (shema qualified). \nEach statement that you then put in there will get prepared when you \nSELECT prepare_all();\n\nHope this helps, regards,\n-- \ndim", "msg_date": "Wed, 13 May 2009 23:23:15 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." 
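A minimal usage sketch of the module described above, going only by the README summary quoted here -- the schema, table and statement names are placeholders, while prepare_all(), the name/statement columns and preprepare.relation are as described:

    -- table holding the statements to prepare on every fresh backend
    CREATE SCHEMA prep;
    CREATE TABLE prep.statements (name text, statement text);
    INSERT INTO prep.statements VALUES
      ('hist_by_obj',
       'PREPARE hist_by_obj (char(10)) AS SELECT * FROM history WHERE ref_object = $1 ORDER BY horder');

    -- postgresql.conf (8.3/8.4 need custom_variable_classes for module GUCs,
    -- assuming the module exposes its setting that way):
    --   custom_variable_classes = 'preprepare'
    --   preprepare.relation = 'prep.statements'

    -- pgbouncer side, as suggested above:
    --   connect_query = 'SELECT prepare_all();'

After that the application can issue EXECUTE hist_by_obj(...) unconditionally, whether it lands on a fresh backend or a recycled one.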
}, { "msg_contents": "\nOn Tue, 2009-05-12 at 14:28 +0200, Dimitri wrote:\n\n> As problem I'm considering a scalability issue on Read-Only workload -\n> only selects, no disk access, and if on move from 8 to 16 cores we\n> gain near 100%, on move from 16 to 32 cores it's only 10%...\n\nDimitri,\n\nWill you be re-running the Read-Only tests?\n\nCan you run the Dtrace script to assess LWlock contention during the\nrun?\n\nWould you re-run the tests with a patch?\n\nThanks,\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Thu, 14 May 2009 18:03:47 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Folks, sorry, I'm outpassed little bit by the events :-))\n\nI've finished tests with PREPARE/EXECUTE - it's much faster of course,\nand the max TSP is 15.000 now on 24 cores! - I've done various tests\nto see where is the limit bottleneck may be present - it's more likely\nsomething timer or interrupt based, etc. Nothing special via DTrace,\nor probably it'll say you more things then me, but for a 10sec period\nit's quite small wait time:\n\n# lwlock_wait_8.4.d `pgrep -n postgres`\n\n Lock Id Mode Count\n FirstBufMappingLock Exclusive 1\n FirstLockMgrLock Exclusive 1\n BufFreelistLock Exclusive 3\n FirstBufMappingLock Shared 4\n FirstLockMgrLock Shared 4\n\n Lock Id Mode Combined Time (ns)\n FirstLockMgrLock Exclusive 803700\n BufFreelistLock Exclusive 3001600\n FirstLockMgrLock Shared 4586600\n FirstBufMappingLock Exclusive 6283900\n FirstBufMappingLock Shared 21792900\n\nOn the same time those lock waits are appearing only on 24 or 32 cores.\nI'll plan to replay this case on the bigger server (64 cores or more)\n- it'll be much more evident if the problem is in locks.\n\nCurrently I'm finishing my report with all data all of you asked\n(system graphs, pgsql, and other). I'll publish it on my web site and\nsend you a link.\n\nRgds,\n-Dimitri\n\nOn 5/14/09, Simon Riggs <[email protected]> wrote:\n>\n> On Tue, 2009-05-12 at 14:28 +0200, Dimitri wrote:\n>\n>> As problem I'm considering a scalability issue on Read-Only workload -\n>> only selects, no disk access, and if on move from 8 to 16 cores we\n>> gain near 100%, on move from 16 to 32 cores it's only 10%...\n>\n> Dimitri,\n>\n> Will you be re-running the Read-Only tests?\n>\n> Can you run the Dtrace script to assess LWlock contention during the\n> run?\n>\n> Would you re-run the tests with a patch?\n>\n> Thanks,\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n", "msg_date": "Thu, 14 May 2009 20:25:39 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "It's absolutely great!\nit'll not help here because a think time is 0.\nbut for any kind of solution with a spooler it's a must to try!\n\nRgds,\n-Dimitri\n\nOn 5/13/09, Dimitri Fontaine <[email protected]> wrote:\n> Hi,\n>\n> Le 13 mai 09 à 18:42, Scott Carey a écrit :\n>>> will not help, as each client is *not* disconnecting/reconnecting\n>>> during the test, as well PG is keeping well even 256 users. And TPS\n>>> limit is reached already on 64 users, don't think pooler will help\n>>> here.\n>>\n>> Actually, it might help a little. Postgres has a flaw that makes\n>> backends\n>> block on a lock briefly based on the number of total backends --\n>> active or\n>> completely passive. 
Your tool has some (very small) user-side delay\n>> and a\n>> connection pooler would probably allow 64 of your users to\n>> efficiently 'fit'\n>> in 48 or so connection pooler slots.\n>\n> It seems you have think time, and I'm only insisting on what Scott\n> said, but having thinktime means a connection pool can help. Pgbouncer\n> is a good choice because it won't even attempt to parse the queries,\n> and it has a flexible configuration.\n>\n>>>> 3. Prepared statements\n>>> yes, I'm preparing this test.\n>\n> It's possible to use prepared statement and benefit from pgbouncer at\n> the same time, but up until now it requires the application to test\n> whether its statements are already prepared at connection time,\n> because the application is not controlling when pgbouncer is reusing\n> an existing backend or giving it a fresh one.\n>\n> As I think I need this solution too, I've coded a PG module to scratch\n> that itch this morning, and just published it (BSD licenced) on\n> pgfoundry:\n> http://preprepare.projects.postgresql.org/README.html\n> http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/preprepare/preprepare/\n>\n> With this module and the proper pgbouncer setup (connect_query='SELECT\n> prepare_all();') the application has no more to special case the fresh-\n> backend-nothing-prepared case, it's all transparent, just replace your\n> SELECT query with its EXECUTE foo(x, y, z) counter part.\n>\n> I've took the approach to setup the prepared statements themselves\n> into a table with columns name and statement, this latter one\n> containing the full PREPARE SQL command. There's a custom variable\n> preprepare.relation that has to be your table name (shema qualified).\n> Each statement that you then put in there will get prepared when you\n> SELECT prepare_all();\n>\n> Hope this helps, regards,\n> --\n> dim\n", "msg_date": "Thu, 14 May 2009 20:28:23 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Hi Scott,\n\nlet me now finish my report and regroup all data together, and then\nwe'll continue discussion as it'll come more in debug/profile phase..\n- I'll be not polite from my part to send some tons of attachments to\nthe mail list :-)\n\nRgds,\n-Dimitri\n\nOn 5/13/09, Scott Carey <[email protected]> wrote:\n>\n> On 5/13/09 3:22 AM, \"Dimitri\" <[email protected]> wrote:\n>\n>> Hi Scott,\n>>\n>> On 5/12/09, Scott Carey <[email protected]> wrote:\n>>> Although nobody wants to support it, he should try the patch that Jignesh\n>>> K.\n>>> Shah (from Sun) proposed that makes ProcArrayLock lighter-weight. If it\n>>> makes 32 cores much faster, then we have a smoking gun.\n>>>\n>>> Although everyone here is talking about this as an 'unoptimal' solution,\n>>> the\n>>> fact is there is no evidence that a connection pooler will fix the\n>>> scalability from 16 > 32 cores.\n>>> Certainly a connection pooler will help most results, but it may not fix\n>>> the\n>>> scalability problem.\n>>>\n>>> A question for Dimitri:\n>>> What is the scalability from 16 > 32 cores at the 'peak' load that occurs\n>>> near 2x the CPU count? Is it also poor? 
If this is also poor, IMO the\n>>> community here should not be complaining about this unopimal case -- a\n>>> connection pooler at that stage does little and prepared statements will\n>>> increase throughput but not likely alter scalability.\n>>\n>> I'm attaching a small graph showing a TPS level on PG 8.4 depending on\n>> number of cores (X-axis is a number of concurrent users, Y-axis is the\n>> TPS number). As you may see TPS increase is near linear while moving\n>> from 8 to 16 cores, while on 32cores even it's growing slightly\n>> differently, what is unclear is why TPS level is staying limited to\n>> 11.000 TPS on 32cores. And it's pure read-only workload.\n>>\n>\n> Interesting. What hardware is this, btw? Looks like the 32 core system\n> probably has 2x the CPU and a bit less interconnect efficiency versus the 16\n> core one (which would be typical).\n> Is the 16 core case the same, but with fewer cores per processor active? Or\n> fewer processors total?\n> Understanding the scaling difference may require a better understanding of\n> the other differences besides core count.\n>\n>>>\n>>> If that result scales, then the short term answer is a connection pooler.\n>>>\n>>> In the tests that Jingesh ran -- making the ProcArrayLock faster helped\n>>> the\n>>> case where connections = 2x the CPU core count quite a bit.\n>>>\n>>> The thread about the CPU scalability is \"Proposal of tunable fix for\n>>> scalability of 8.4\", originally posted by \"Jignesh K. Shah\"\n>>> <[email protected]>, March 11 2009.\n>>>\n>>> It would be very useful to see results of this benchmark with:\n>>> 1. A Connection Pooler\n>>\n>> will not help, as each client is *not* disconnecting/reconnecting\n>> during the test, as well PG is keeping well even 256 users. And TPS\n>> limit is reached already on 64 users, don't think pooler will help\n>> here.\n>>\n>\n> Actually, it might help a little. Postgres has a flaw that makes backends\n> block on a lock briefly based on the number of total backends -- active or\n> completely passive. Your tool has some (very small) user-side delay and a\n> connection pooler would probably allow 64 of your users to efficiently 'fit'\n> in 48 or so connection pooler slots.\n>\n> It is not about connecting and disconnecting in this case, its about\n> minimizing Postgres' process count. If this does help, it would hint at\n> certain bottlenecks. If it doesn't it would point elsewhere (and quiet some\n> critics).\n>\n> However, its unrealistic for any process-per-connection system to have less\n> backends than about 2x the core count -- else any waiting on I/O or network\n> will just starve CPU. So this would just be done for research, not a real\n> answer to making it scale better.\n>\n> For those who say \"but, what if its I/O bound! You don't need more\n> backends then!\": Well you don't need more CPU either if you're I/O bound.\n> By definition, CPU scaling tests imply the I/O can keep up.\n>\n>\n>>> 2. Jignesh's patch\n>>\n>> I've already tested it and it did not help in my case because the real\n>> problem is elsewhere.. (however, I did not test it yet with my latest\n>> config params)\n>>\n>\n> Great to hear that! -- That means this case is probably not ProcArrayLock.\n> If its Solaris, could we get:\n> 1. What is the CPU stats when it is in the inefficient state near 64 or 128\n> concurrent users (vmstat, etc. I'm interested in CPU in\n> user/system/idle/wait time, and context switches/sec mostly).\n> 2. 
A Dtrace probe on the postgres locks -- we might be able to identify\n> something here.\n>\n> The results here would be useful -- if its an expected condition in the\n> planner or parser, it would be useful confirmation. If its something\n> unexpected and easy to fix -- it might be changed relatively soon.\n>\n> If its not easy to detect, it could be many other things -- but the process\n> above at least rules some things out and better characterizes the state.\n>\n>>> 3. Prepared statements\n>>>\n>>\n>> yes, I'm preparing this test.\n>>\n>>> #3 is important, because prepared statements are ideal for queries that\n>>> perform well with low statistics_targets, and not ideal for those that\n>>> require high statistics targets. Realistically, an app won't have more\n>>> than\n>>> a couple dozen statement forms to prepare. Setting the default\n>>> statistics\n>>> target to 5 is just a way to make some other query perform bad.\n>>\n>> Agree, but as you may have a different statistic target *per* table it\n>> should not be a problem. What is sure - all time spent on parse and\n>> planner will be removed here, and the final time should be a pure\n>> execution.\n>>\n>\n> I'm definitely interested here because although pure execution will\n> certainly be faster, it may not scale any better.\n>\n>\n>> Rgds,\n>> -Dimitri\n>>\n>\n>\n", "msg_date": "Thu, 14 May 2009 20:34:48 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Thu, 2009-05-14 at 20:25 +0200, Dimitri wrote:\n\n> # lwlock_wait_8.4.d `pgrep -n postgres`\n\n> Lock Id Mode Combined Time (ns)\n> FirstLockMgrLock Exclusive 803700\n> BufFreelistLock Exclusive 3001600\n> FirstLockMgrLock Shared 4586600\n> FirstBufMappingLock Exclusive 6283900\n> FirstBufMappingLock Shared 21792900\n\nI've published two patches to -Hackers to see if we can improve the read\nonly numbers on 32+ cores.\n\nTry shared_buffer_partitions = 256\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Mon, 18 May 2009 16:07:59 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Wed, 2009-05-13 at 23:23 +0200, Dimitri Fontaine wrote:\n\n> As I think I need this solution too, I've coded a PG module to\n> scratch \n> that itch this morning, and just published it (BSD licenced) on \n> pgfoundry:\n> http://preprepare.projects.postgresql.org/README.html\n> http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/preprepare/preprepare/\n\nLooks very cool Dimitri\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Mon, 18 May 2009 16:10:20 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." 
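Strictly speaking the override is per column rather than per table, but the effect is the one described: the columns this query actually touches can get a small statistics target while the rest of the database keeps the default. A sketch using the column names from the plans quoted in this thread (adjust to the real schema):

    ALTER TABLE history ALTER COLUMN ref_object SET STATISTICS 5;
    ALTER TABLE history ALTER COLUMN ref_stat   SET STATISTICS 5;
    ALTER TABLE stat    ALTER COLUMN ref        SET STATISTICS 5;
    ANALYZE history;
    ANALYZE stat;

If the gain seen with default_statistics_target = 5 comes from these columns, this keeps that gain without lowering the default for every other query in the database, which was the objection raised earlier.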
}, { "msg_contents": "Folks, I've just published a full report including all results here:\nhttp://dimitrik.free.fr/db_STRESS_PostgreSQL_837_and_84_May2009.html\n\n From my point of view it needs first to understand where the time is\nwasted on a single query (even when the statement is prepared it runs\nstill slower comparing to MySQL).\n\nThen to investigate on scalability issue I think a bigger server will\nbe needed here (I'm looking for 64cores at least :-))\n\nIf you have some other ideas or patches (like Simon) - don't hesitate\nto send them - once I'll get an access to the server again the\navailable test time will be very limited..\n\nBest regards!\n-Dimitri\n\n\nOn 5/18/09, Simon Riggs <[email protected]> wrote:\n>\n> On Thu, 2009-05-14 at 20:25 +0200, Dimitri wrote:\n>\n>> # lwlock_wait_8.4.d `pgrep -n postgres`\n>\n>> Lock Id Mode Combined Time (ns)\n>> FirstLockMgrLock Exclusive 803700\n>> BufFreelistLock Exclusive 3001600\n>> FirstLockMgrLock Shared 4586600\n>> FirstBufMappingLock Exclusive 6283900\n>> FirstBufMappingLock Shared 21792900\n>\n> I've published two patches to -Hackers to see if we can improve the read\n> only numbers on 32+ cores.\n>\n> Try shared_buffer_partitions = 256\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n", "msg_date": "Mon, 18 May 2009 20:00:59 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Mon, 2009-05-18 at 20:00 +0200, Dimitri wrote:\n\n> >From my point of view it needs first to understand where the time is\n> wasted on a single query (even when the statement is prepared it runs\n> still slower comparing to MySQL).\n\nThere is still a significant number of things to say about these numbers\nand much tuning still to do, so I'm still confident of improving those\nnumbers if we needed to.\n\nIn particular, running the tests repeatedly using \n\tH.REF_OBJECT = '0000000001'\nrather than varying the value seems likely to benefit MySQL. The\ndistribution of values is clearly non-linear; while Postgres picks a\nstrange plan for that particular value, I would guess there are also\nvalues for which the MySQL plan is sub-optimal. Depending upon the\ndistribution of selected data we might see the results go either way.\n\nWhat I find worrying is your result of a scalability wall for hash\njoins. Is that a repeatable issue?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Mon, 18 May 2009 20:27:34 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Great data Dimitri!'\n\nI see a few key trends in the poor scalability:\n\nThe throughput scales roughly with %CPU fairly well. But CPU used doesn't\ngo past ~50% on the 32 core tests. This indicates lock contention.\n\nOther proof of lock contention are the mutex locks / sec graph which climbs\nrapidly as the system gets more inefficient (along with context switches).\n\nAnother trend is the system calls/sec which caps out with the test, at about\n400,000 per sec on the peak (non-prepared statement) result. 
Note that when\nthe buffer size is 256MB, the performance scales much worse and is slower.\nAnd correlated with this the system calls/sec per transaction is more than\ndouble, at slower throughput.\n\nUsing the OS to cache pages is not as fast as pages in shared_buffers, by a\nmore significant amount with many cores and higher concurrency than in the\nlow concurrency case.\n\nThe system is largely lock limited in the poor scaling results. This holds\ntrue with or without the use of prepared statements -- which help a some,\nbut not a lot and don't affect the scalability.\n\n\n4096MB shared buffers, 32 cores, 8.4, read only:\nhttp://dimitrik.free.fr/Report_20090505/5539_dim_STAT_70.html\n\n256MB cache, 32 cores, 8.4, read-only:\nhttp://dimitrik.free.fr/Report_20090505/5539_dim_STAT_52.html\n\n4096MB shared buffs, 32 cores, 8.4, read only, prepared statements\nhttp://dimitrik.free.fr/Report_20090505/5539_dim_STAT_70.html\n\nOn 5/18/09 11:00 AM, \"Dimitri\" <[email protected]> wrote:\n\n> Folks, I've just published a full report including all results here:\n> http://dimitrik.free.fr/db_STRESS_PostgreSQL_837_and_84_May2009.html\n> \n> From my point of view it needs first to understand where the time is\n> wasted on a single query (even when the statement is prepared it runs\n> still slower comparing to MySQL).\n> \n> Then to investigate on scalability issue I think a bigger server will\n> be needed here (I'm looking for 64cores at least :-))\n> \n> If you have some other ideas or patches (like Simon) - don't hesitate\n> to send them - once I'll get an access to the server again the\n> available test time will be very limited..\n> \n> Best regards!\n> -Dimitri\n> \n> \n> On 5/18/09, Simon Riggs <[email protected]> wrote:\n>> \n>> On Thu, 2009-05-14 at 20:25 +0200, Dimitri wrote:\n>> \n>>> # lwlock_wait_8.4.d `pgrep -n postgres`\n>> \n>>> Lock Id Mode Combined Time (ns)\n>>> FirstLockMgrLock Exclusive 803700\n>>> BufFreelistLock Exclusive 3001600\n>>> FirstLockMgrLock Shared 4586600\n>>> FirstBufMappingLock Exclusive 6283900\n>>> FirstBufMappingLock Shared 21792900\n>> \n>> I've published two patches to -Hackers to see if we can improve the read\n>> only numbers on 32+ cores.\n>> \n>> Try shared_buffer_partitions = 256\n>> \n>> --\n>> Simon Riggs www.2ndQuadrant.com\n>> PostgreSQL Training, Services and Support\n>> \n>> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Mon, 18 May 2009 12:37:48 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nWhat I don't understand is the part where you talking about disabling hash\njoins:\n\n> * result: planner replaced hash join is replaced by merge join\n> * execution time: 0.84ms !\n> * NOTE: curiously planner is expecting to execute this query in 0.29ms\n- so it's supposed from its logic to be faster, so why this plan is not used\nfrom the beginning???... 
\n>\n> Sort (cost=4562.83..4568.66 rows=2329 width=176) (actual\ntime=0.237..0.237 rows=20 loops=1)\n> Sort Key: h.horder\n> Sort Method: quicksort Memory: 30kB\n> -> Merge Join (cost=4345.89..4432.58 rows=2329 width=176)\n(actual time=0.065..0.216 rows=20 loops=1)\n> Merge Cond: (s.ref = h.ref_stat)\n> -> Index Scan using stat_ref_idx on stat s\n(cost=0.00..49.25 rows=1000 width=45) (actual time=0.018..0.089 rows=193\nloops=1)\n> -> Sort (cost=4345.89..4351.72 rows=2329 width=135)\n(actual time=0.042..0.043 rows=20 loops=1)\n> Sort Key: h.ref_stat\n> Sort Method: quicksort Memory: 30kB\n> -> Index Scan using history_ref_idx on history h\n(cost=0.00..4215.64 rows=2329 width=135) (actual time=0.012..0.025 rows=20\nloops=1)\n> Index Cond: (ref_object = '0000000001'::bpchar)\n> Total runtime: 0.288 ms\n> (12 rows)\n\nThe explain analyze ran the query in 0.288 ms. That is the actual time it\ntook to run the query on the server. It is not an estimate of the time.\nYou measured 0.84 ms to run the query, which seems to imply either a problem\nin one of the timing methods or that 66% of your query execution time is\nsending the results to the client. I'm curious how you did you execution\ntime measurements.\n\nDave\n\n", "msg_date": "Mon, 18 May 2009 14:44:34 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Thanks Dave for correction, but I'm also curious where the time is\nwasted in this case?..\n\n0.84ms is displayed by \"psql\" once the result output is printed, and I\ngot similar time within my client (using libpq) which is not printing\nany output..\n\nRgds,\n-Dimitri\n\nOn 5/18/09, Dave Dutcher <[email protected]> wrote:\n>\n> What I don't understand is the part where you talking about disabling hash\n> joins:\n>\n>> * result: planner replaced hash join is replaced by merge join\n>> * execution time: 0.84ms !\n>> * NOTE: curiously planner is expecting to execute this query in 0.29ms\n> - so it's supposed from its logic to be faster, so why this plan is not used\n> from the beginning???...\n>>\n>> Sort (cost=4562.83..4568.66 rows=2329 width=176) (actual\n> time=0.237..0.237 rows=20 loops=1)\n>> Sort Key: h.horder\n>> Sort Method: quicksort Memory: 30kB\n>> -> Merge Join (cost=4345.89..4432.58 rows=2329 width=176)\n> (actual time=0.065..0.216 rows=20 loops=1)\n>> Merge Cond: (s.ref = h.ref_stat)\n>> -> Index Scan using stat_ref_idx on stat s\n> (cost=0.00..49.25 rows=1000 width=45) (actual time=0.018..0.089 rows=193\n> loops=1)\n>> -> Sort (cost=4345.89..4351.72 rows=2329 width=135)\n> (actual time=0.042..0.043 rows=20 loops=1)\n>> Sort Key: h.ref_stat\n>> Sort Method: quicksort Memory: 30kB\n>> -> Index Scan using history_ref_idx on history h\n> (cost=0.00..4215.64 rows=2329 width=135) (actual time=0.012..0.025 rows=20\n> loops=1)\n>> Index Cond: (ref_object = '0000000001'::bpchar)\n>> Total runtime: 0.288 ms\n>> (12 rows)\n>\n> The explain analyze ran the query in 0.288 ms. That is the actual time it\n> took to run the query on the server. It is not an estimate of the time.\n> You measured 0.84 ms to run the query, which seems to imply either a problem\n> in one of the timing methods or that 66% of your query execution time is\n> sending the results to the client. 
I'm curious how you did you execution\n> time measurements.\n>\n> Dave\n>\n>\n", "msg_date": "Tue, 19 May 2009 00:32:43 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On 5/18/09, Scott Carey <[email protected]> wrote:\n> Great data Dimitri!'\n\nThank you! :-)\n\n>\n> I see a few key trends in the poor scalability:\n>\n> The throughput scales roughly with %CPU fairly well. But CPU used doesn't\n> go past ~50% on the 32 core tests. This indicates lock contention.\n>\n\nYou should not look on #1 STATs, but on #2 - they are all with the\nlatest \"fixes\" - on all of them CPU is used well (90% in pic on\n32cores).\nAlso, keep in mind these cores are having 2 threads, and from Solaris\npoint of view they are seen as CPU (so 64 CPU) and %busy is accounted\nas for 64 CPU\n\n> Other proof of lock contention are the mutex locks / sec graph which climbs\n\nexactly, except no locking was seen on processes while I tried to\ntrace them.. What I think will be needed here is a global and\ncorelated tracing of all PG processes - I did not expect to do it now,\nbut next time\n\n> rapidly as the system gets more inefficient (along with context switches).\n>\n> Another trend is the system calls/sec which caps out with the test, at about\n> 400,000 per sec on the peak (non-prepared statement) result. Note that when\n> the buffer size is 256MB, the performance scales much worse and is slower.\n> And correlated with this the system calls/sec per transaction is more than\n> double, at slower throughput.\n\nof course, because even the data were cached by filesystem to get them\nyou still need to call a read() system call..\n\n>\n> Using the OS to cache pages is not as fast as pages in shared_buffers, by a\n> more significant amount with many cores and higher concurrency than in the\n> low concurrency case.\n\n\nexactly, it's what I also wanted to demonstrate because I often hear\n\"PG is delegating caching to the filesystem\" - and I don't think it's\noptimal :-)\n\n>\n> The system is largely lock limited in the poor scaling results. 
This holds\n> true with or without the use of prepared statements -- which help a some,\n> but not a lot and don't affect the scalability.\n\nwe are agree here, but again - 20K mutex spins/sec is a quite low\nvalue, that's why I hope on the bigger server it'll be more clear\nwhere is a bottleneck :-)\n\nRgds,\n-Dimitri\n\n\n>\n>\n> 4096MB shared buffers, 32 cores, 8.4, read only:\n> http://dimitrik.free.fr/Report_20090505/5539_dim_STAT_70.html\n>\n> 256MB cache, 32 cores, 8.4, read-only:\n> http://dimitrik.free.fr/Report_20090505/5539_dim_STAT_52.html\n>\n> 4096MB shared buffs, 32 cores, 8.4, read only, prepared statements\n> http://dimitrik.free.fr/Report_20090505/5539_dim_STAT_70.html\n>\n> On 5/18/09 11:00 AM, \"Dimitri\" <[email protected]> wrote:\n>\n>> Folks, I've just published a full report including all results here:\n>> http://dimitrik.free.fr/db_STRESS_PostgreSQL_837_and_84_May2009.html\n>>\n>> From my point of view it needs first to understand where the time is\n>> wasted on a single query (even when the statement is prepared it runs\n>> still slower comparing to MySQL).\n>>\n>> Then to investigate on scalability issue I think a bigger server will\n>> be needed here (I'm looking for 64cores at least :-))\n>>\n>> If you have some other ideas or patches (like Simon) - don't hesitate\n>> to send them - once I'll get an access to the server again the\n>> available test time will be very limited..\n>>\n>> Best regards!\n>> -Dimitri\n>>\n>>\n>> On 5/18/09, Simon Riggs <[email protected]> wrote:\n>>>\n>>> On Thu, 2009-05-14 at 20:25 +0200, Dimitri wrote:\n>>>\n>>>> # lwlock_wait_8.4.d `pgrep -n postgres`\n>>>\n>>>> Lock Id Mode Combined Time (ns)\n>>>> FirstLockMgrLock Exclusive 803700\n>>>> BufFreelistLock Exclusive 3001600\n>>>> FirstLockMgrLock Shared 4586600\n>>>> FirstBufMappingLock Exclusive 6283900\n>>>> FirstBufMappingLock Shared 21792900\n>>>\n>>> I've published two patches to -Hackers to see if we can improve the read\n>>> only numbers on 32+ cores.\n>>>\n>>> Try shared_buffer_partitions = 256\n>>>\n>>> --\n>>> Simon Riggs www.2ndQuadrant.com\n>>> PostgreSQL Training, Services and Support\n>>>\n>>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n", "msg_date": "Tue, 19 May 2009 00:32:54 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On 5/18/09, Simon Riggs <[email protected]> wrote:\n>\n> On Mon, 2009-05-18 at 20:00 +0200, Dimitri wrote:\n>\n>> >From my point of view it needs first to understand where the time is\n>> wasted on a single query (even when the statement is prepared it runs\n>> still slower comparing to MySQL).\n>\n> There is still a significant number of things to say about these numbers\n> and much tuning still to do, so I'm still confident of improving those\n> numbers if we needed to.\n>\n> In particular, running the tests repeatedly using\n> \tH.REF_OBJECT = '0000000001'\n> rather than varying the value seems likely to benefit MySQL. 
The\n\nlet me repeat again - the reference is *random*,\nthe '0000000001' value I've used just to show a query execution\nplan.\n\nalso, what is important - the random ID is chosen in way that no one\nuser use the same to avoid deadlocks previously seen with PostgreSQL\n(see the \"Deadlock mystery\" note 2 years ago\nhttp://dimitrik.free.fr/db_STRESS_BMK_Part1.html#note_4355 )\n\n> distribution of values is clearly non-linear; while Postgres picks a\n> strange plan for that particular value, I would guess there are also\n> values for which the MySQL plan is sub-optimal. Depending upon the\n> distribution of selected data we might see the results go either way.\n>\n> What I find worrying is your result of a scalability wall for hash\n> joins. Is that a repeatable issue?\n\nI think yes (but of course I did not try to replay it several times)\n\nRgds,\n-Dimitri\n\n\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n", "msg_date": "Tue, 19 May 2009 00:33:07 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> In particular, running the tests repeatedly using \n> \tH.REF_OBJECT = '0000000001'\n> rather than varying the value seems likely to benefit MySQL.\n\n... mumble ... query cache?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 May 2009 19:00:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.. " }, { "msg_contents": "\nOn 5/18/09 3:32 PM, \"Dimitri\" <[email protected]> wrote:\n\n> On 5/18/09, Scott Carey <[email protected]> wrote:\n>> Great data Dimitri!'\n> \n> Thank you! :-)\n> \n>> \n>> I see a few key trends in the poor scalability:\n>> \n>> The throughput scales roughly with %CPU fairly well. But CPU used doesn't\n>> go past ~50% on the 32 core tests. This indicates lock contention.\n>> \n> \n> You should not look on #1 STATs, but on #2 - they are all with the\n> latest \"fixes\" - on all of them CPU is used well (90% in pic on\n> 32cores).\n> Also, keep in mind these cores are having 2 threads, and from Solaris\n> point of view they are seen as CPU (so 64 CPU) and %busy is accounted\n> as for 64 CPU\n> \n\nWell, if the CPU usage is actually higher, then it might not be lock waiting\n-- it could be spin locks or context switches or cache coherency overhead.\nPostgres may also not be very SMT friendly, at least on the hardware tested\nhere.\n\n(what was the context switch rate? I didn't see that in the data, just\nmutex spins).\n\nThe scalability curve is definitely showing something. Prepared statements\nwere tried, as were most of the other suggestions other than one:\n\nWhat happens if the queries are more complicated (say, they take 15ms server\nside with a more complicated plan required)? That is a harder question to\nanswer\n\n", "msg_date": "Mon, 18 May 2009 17:51:19 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-19 at 00:33 +0200, Dimitri wrote:\n> >\n> > In particular, running the tests repeatedly using\n> > \tH.REF_OBJECT = '0000000001'\n> > rather than varying the value seems likely to benefit MySQL. 
The\n> \n> let me repeat again - the reference is *random*,\n> the '0000000001' value I've used just to show a query execution\n> plan.\n> \n> also, what is important - the random ID is chosen in way that no one\n> user use the same to avoid deadlocks previously seen with PostgreSQL\n> (see the \"Deadlock mystery\" note 2 years ago\n> http://dimitrik.free.fr/db_STRESS_BMK_Part1.html#note_4355 )\n\nOK, didn't pick up on that.\n\n(Like Tom, I was thinking query cache)\n\nCan you comment on the distribution of values for that column? If you\nare picking randomly, this implies distribution is uniform and so I am\nsurprised we are mis-estimating the selectivity.\n\n> I think yes (but of course I did not try to replay it several times)\n\nIf you could that would be appreciated. We don't want to go chasing\nafter something that is not repeatable.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 19 May 2009 08:25:45 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "No, Tom, the query cache was off.\nI put it always explicitly off on MySQL as it has scalability issues.\n\nRgds,\n-Dimitri\n\nOn 5/19/09, Tom Lane <[email protected]> wrote:\n> Simon Riggs <[email protected]> writes:\n>> In particular, running the tests repeatedly using\n>> \tH.REF_OBJECT = '0000000001'\n>> rather than varying the value seems likely to benefit MySQL.\n>\n> ... mumble ... query cache?\n>\n> \t\t\tregards, tom lane\n>\n", "msg_date": "Tue, 19 May 2009 12:24:58 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On 5/19/09, Scott Carey <[email protected]> wrote:\n>\n> On 5/18/09 3:32 PM, \"Dimitri\" <[email protected]> wrote:\n>\n>> On 5/18/09, Scott Carey <[email protected]> wrote:\n>>> Great data Dimitri!'\n>>\n>> Thank you! :-)\n>>\n>>>\n>>> I see a few key trends in the poor scalability:\n>>>\n>>> The throughput scales roughly with %CPU fairly well. But CPU used\n>>> doesn't\n>>> go past ~50% on the 32 core tests. This indicates lock contention.\n>>>\n>>\n>> You should not look on #1 STATs, but on #2 - they are all with the\n>> latest \"fixes\" - on all of them CPU is used well (90% in pic on\n>> 32cores).\n>> Also, keep in mind these cores are having 2 threads, and from Solaris\n>> point of view they are seen as CPU (so 64 CPU) and %busy is accounted\n>> as for 64 CPU\n>>\n>\n> Well, if the CPU usage is actually higher, then it might not be lock waiting\n> -- it could be spin locks or context switches or cache coherency overhead.\n> Postgres may also not be very SMT friendly, at least on the hardware tested\n> here.\n\ndo you mean SMP or CMT? ;-)\nhowever both should work well with PostgreSQL. I also think about CPU\naffinity - probably it may help to avoid CPU cache misses - but makes\nsense mostly if pooler will be added as a part of PG.\n\n>\n> (what was the context switch rate? I didn't see that in the data, just\n> mutex spins).\n\nincreasing with a load, as this ex.:\nhttp://dimitrik.free.fr/Report_20090505/5539_dim_STAT_100.html#bmk_CPU_CtxSwitch_100\n\n\n>\n> The scalability curve is definitely showing something. Prepared statements\n> were tried, as were most of the other suggestions other than one:\n>\n> What happens if the queries are more complicated (say, they take 15ms server\n> side with a more complicated plan required)? 
That is a harder question to\n> answer\n\nWhat I observed is: if planner takes more long time (like initially\nwith 8.3.7 and analyze target 1000) the scalability problem is\nappearing more strange -\nhttp://dimitrik.free.fr/Report_20090505/5521_dim_STAT_18.html - as you\nsee CPU even not used more than 60% , and as you may see spin locks\nare lowering - CPUs are not spinning for locks, there is something\nelse..\nI'm supposing a problem of some kind of synchronization - background\nprocesses are not waking up on time or something like this...\nThen, if more time spent on the query execution itself and not planner:\n - if it'll be I/O time - I/O will hide everything else until you\nincrease a storage performance and/or add more RAM, but then you come\nback to the initial issue :-)\n - if it'll be a CPU time it may be interesting! :-)\n\nRgds,\n-Dimitri\n", "msg_date": "Tue, 19 May 2009 12:46:45 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On 5/19/09, Simon Riggs <[email protected]> wrote:\n>\n> On Tue, 2009-05-19 at 00:33 +0200, Dimitri wrote:\n>> >\n>> > In particular, running the tests repeatedly using\n>> > \tH.REF_OBJECT = '0000000001'\n>> > rather than varying the value seems likely to benefit MySQL. The\n>>\n>> let me repeat again - the reference is *random*,\n>> the '0000000001' value I've used just to show a query execution\n>> plan.\n>>\n>> also, what is important - the random ID is chosen in way that no one\n>> user use the same to avoid deadlocks previously seen with PostgreSQL\n>> (see the \"Deadlock mystery\" note 2 years ago\n>> http://dimitrik.free.fr/db_STRESS_BMK_Part1.html#note_4355 )\n>\n> OK, didn't pick up on that.\n>\n> (Like Tom, I was thinking query cache)\n>\n> Can you comment on the distribution of values for that column? If you\n> are picking randomly, this implies distribution is uniform and so I am\n> surprised we are mis-estimating the selectivity.\n\nyes, the distribution of reference values is uniform between\n'0000000001' to '0010000000' (10M), only one OBJECT row by one\nreference, and only 20 rows with the same reference in HISTORY table.\n\n>\n>> I think yes (but of course I did not try to replay it several times)\n>\n> If you could that would be appreciated. We don't want to go chasing\n> after something that is not repeatable.\n\nI'll retry and let you know.\n\nRgds,\n-Dimitri\n", "msg_date": "Tue, 19 May 2009 12:51:56 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Mon, 2009-05-18 at 19:00 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > In particular, running the tests repeatedly using \n> > \tH.REF_OBJECT = '0000000001'\n> > rather than varying the value seems likely to benefit MySQL.\n\nOne thing to note in terms of optimisation of this query is that we\nperform a top-level sort at the end of the query.\n\nBoth plans for this query show an IndexScan on a two column-index, with\nan Index Condition of equality on the leading column. The ORDER BY\nspecifies a sort by the second index column, so the top-level Sort is\nsuperfluous in this case.\n\nMy understanding is that we don't currently eliminate superfluous\nadditional sorts of this kind. 
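To make the shape concrete, a minimal sketch (assuming, as the plans above\nsuggest, that history_ref_idx is a two-column index on (ref_object, horder);\nthe DDL and column list here are illustrative rather than the exact schema):\n\n  CREATE INDEX history_ref_idx ON history (ref_object, horder);\n\n  SELECT s.*, h.*\n    FROM history h\n    JOIN stat s ON (s.ref = h.ref_stat)\n   WHERE h.ref_object = '0000000001'\n   ORDER BY h.horder;\n\nWith the equality constraint on ref_object, the index scan already returns\nrows in horder order, so an order-preserving join could satisfy the ORDER BY\nwith no top-level Sort at all.\n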
Now I know that is a hard subject, but it\nseems straightforward to consider interesting sort order equivalence\nwhen we have constant equality constraints.\n\nMy guess would be that MySQL does do the sort removal, in latest\nversion.\n\nDimitri's EXPLAIN ANALYZEs show differing costs for that additional\nstep, but the around 10% of query time looks shaveable.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 19 May 2009 11:57:26 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, 19 May 2009, Simon Riggs wrote:\n> Both plans for this query show an IndexScan on a two column-index, with\n> an Index Condition of equality on the leading column. The ORDER BY\n> specifies a sort by the second index column, so the top-level Sort is\n> superfluous in this case.\n>\n> My understanding is that we don't currently eliminate superfluous\n> additional sorts of this kind. Now I know that is a hard subject, but it\n> seems straightforward to consider interesting sort order equivalence\n> when we have constant equality constraints.\n\nYes, Postgres has been missing the boat on this one for a while. +1 on \nrequesting this feature.\n\nSpeaking of avoiding large sorts, I'd like to push again for partial \nsorts. This is the situation where an index provides data sorted by column \n\"a\", and the query requests data sorted by \"a, b\". Currently, Postgres \nsorts the entire data set, whereas it need only group each set of \nidentical \"a\" and sort each by \"b\".\n\nMatthew\n\n-- \n Riker: Our memory pathways have become accustomed to your sensory input.\n Data: I understand - I'm fond of you too, Commander. And you too Counsellor\n", "msg_date": "Tue, 19 May 2009 12:17:47 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-19 at 12:17 +0100, Matthew Wakeling wrote:\n> Yes, Postgres has been missing the boat on this one for a while. +1 on\n> requesting this feature.\n\nThat's an optimizer feature.\n\n> Speaking of avoiding large sorts, I'd like to push again for partial \n> sorts. This is the situation where an index provides data sorted by\n> column \"a\", and the query requests data sorted by \"a, b\". Currently,\n> Postgres sorts the entire data set, whereas it need only group each\n> set of identical \"a\" and sort each by \"b\".\n\nThis is an executor feature.\n\nPartially sorted data takes much less effort to sort (OK, not zero, I\ngrant) so this seems like a high complexity, lower value feature. I\nagree it should be on the TODO, just IMHO at a lower priority than some\nother features.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 19 May 2009 12:36:25 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-19 at 12:36 +0100, Simon Riggs wrote:\n\n> Partially sorted data takes much less effort to sort (OK, not zero, I\n> grant) so this seems like a high complexity, lower value feature. I\n> agree it should be on the TODO, just IMHO at a lower priority than some\n> other features.\n\nPerhaps its worth looking at a hybrid merge-join/hash-join that can cope\nwith data only mostly-sorted rather than fully sorted. 
That way we can\nprobably skip the partial sort altogether.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 19 May 2009 12:41:00 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "I may confirm the issue with hash join - it's repeating both with\nprepared and not prepared statements - it's curious because initially\nthe response time is lowering near ~1ms (the lowest seen until now)\nand then once workload growing to 16 sessions it's jumping to 2.5ms,\nthen with 32 sessions it's 18ms, etc..\n\nI've retested on 24 isolated cores, so any external secondary effects\nare avoided.\n\nRgds,\n-Dimitri\n\nOn 5/19/09, Dimitri <[email protected]> wrote:\n> On 5/19/09, Simon Riggs <[email protected]> wrote:\n>>\n>> On Tue, 2009-05-19 at 00:33 +0200, Dimitri wrote:\n>>> >\n>>> > In particular, running the tests repeatedly using\n>>> > \tH.REF_OBJECT = '0000000001'\n>>> > rather than varying the value seems likely to benefit MySQL. The\n>>>\n>>> let me repeat again - the reference is *random*,\n>>> the '0000000001' value I've used just to show a query execution\n>>> plan.\n>>>\n>>> also, what is important - the random ID is chosen in way that no one\n>>> user use the same to avoid deadlocks previously seen with PostgreSQL\n>>> (see the \"Deadlock mystery\" note 2 years ago\n>>> http://dimitrik.free.fr/db_STRESS_BMK_Part1.html#note_4355 )\n>>\n>> OK, didn't pick up on that.\n>>\n>> (Like Tom, I was thinking query cache)\n>>\n>> Can you comment on the distribution of values for that column? If you\n>> are picking randomly, this implies distribution is uniform and so I am\n>> surprised we are mis-estimating the selectivity.\n>\n> yes, the distribution of reference values is uniform between\n> '0000000001' to '0010000000' (10M), only one OBJECT row by one\n> reference, and only 20 rows with the same reference in HISTORY table.\n>\n>>\n>>> I think yes (but of course I did not try to replay it several times)\n>>\n>> If you could that would be appreciated. We don't want to go chasing\n>> after something that is not repeatable.\n>\n> I'll retry and let you know.\n>\n> Rgds,\n> -Dimitri\n>\n", "msg_date": "Tue, 19 May 2009 14:00:41 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, 19 May 2009, Simon Riggs wrote:\n>> Speaking of avoiding large sorts, I'd like to push again for partial\n>> sorts. This is the situation where an index provides data sorted by\n>> column \"a\", and the query requests data sorted by \"a, b\". Currently,\n>> Postgres sorts the entire data set, whereas it need only group each\n>> set of identical \"a\" and sort each by \"b\".\n>\n> Partially sorted data takes much less effort to sort (OK, not zero, I\n> grant) so this seems like a high complexity, lower value feature. I\n> agree it should be on the TODO, just IMHO at a lower priority than some\n> other features.\n\nNot arguing with you, however I'd like to point out that partial sorting \nallows the results to be streamed, which would lower the cost to produce \nthe first row of results significantly, and reduce the amount of RAM used \nby the query, and prevent temporary tables from being used. That has to be \na fairly major win. Queries with a LIMIT would see the most benefit.\n\nThat leads me on to another topic. 
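(As a minimal sketch of the partial sort case, with invented table and\nindex names:\n\n  CREATE INDEX t_a_idx ON t (a);\n\n  SELECT * FROM t ORDER BY a, b LIMIT 20;\n\nThe index scan already delivers rows ordered by a, so only each run of rows\nwith equal a needs sorting by b, and rows can be streamed out as soon as a\nrun is complete instead of after a full sort of the whole result.)\n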
Consider the query:\n\nSELECT * FROM table ORDER BY a, b\n\nwhere the column \"a\" is declared UNIQUE and has an index. Does Postgres \neliminate \"b\" from the ORDER BY, and therefore allow fetching without \nsorting from the index?\n\nOr how about this query:\n\nSELECT * FROM table1, table2 WHERE table1.fk = table2.id ORDER BY\n table1.id, table2.id\n\nwhere both \"id\" columns are UNIQUE with an index. Do we eliminate \n\"table2.id\" from the ORDER BY in this case?\n\nMatthew\n\n-- \n\"Programming today is a race between software engineers striving to build\n bigger and better idiot-proof programs, and the Universe trying to produce\n bigger and better idiots. So far, the Universe is winning.\" -- Rich Cook\n", "msg_date": "Tue, 19 May 2009 13:01:43 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> Both plans for this query show an IndexScan on a two column-index, with\n> an Index Condition of equality on the leading column. The ORDER BY\n> specifies a sort by the second index column, so the top-level Sort is\n> superfluous in this case.\n\n> My understanding is that we don't currently eliminate superfluous\n> additional sorts of this kind.\n\nNonsense. The planner might think some other plan is cheaper, but\nit definitely knows how to do this, and has since at least 8.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 May 2009 08:58:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.. " }, { "msg_contents": "On Mon, May 18, 2009 at 6:32 PM, Dimitri <[email protected]> wrote:\n> Thanks Dave for correction, but I'm also curious where the time is\n> wasted in this case?..\n>\n> 0.84ms is displayed by \"psql\" once the result output is printed, and I\n> got similar time within my client (using libpq) which is not printing\n> any output..\n\nUsing libpq? What is the exact method you are using to execute\nqueries...PQexec? If you are preparing queries against libpq, the\nbest way to execute queries is via PQexecPrepared. Also, it's\ninteresting to see if you can get any benefit from asynchronous\nqueries (PQsendPrepared), but this might involve more changes to your\napplication than you are willing to make.\n\nAnother note: I would like to point out again that there are possible\nnegative side effects in using char(n) vs. varchar(n) that IIRC do not\nexist in mysql. When you repeat your test I strongly advise switching\nto varchar.\n\nAnother question: how exactly are you connecting to the database?\nlocal machine? if so, domain socket or tcp/ip? What are you doing\nwith the results...immediately discarding?\n\nOne last thing: when you get access to the server, can you run a\ncustom format query test from pgbench and compare the results to your\ntest similarly configured (same number of backends, etc) in terms of\ntps?\n\nmerlin\n", "msg_date": "Tue, 19 May 2009 09:05:01 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-19 at 08:58 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > Both plans for this query show an IndexScan on a two column-index, with\n> > an Index Condition of equality on the leading column. 
The ORDER BY\n> > specifies a sort by the second index column, so the top-level Sort is\n> > superfluous in this case.\n> \n> > My understanding is that we don't currently eliminate superfluous\n> > additional sorts of this kind.\n> \n> Nonsense. The planner might think some other plan is cheaper, but\n> it definitely knows how to do this, and has since at least 8.1.\n\nPlease look at Dimitri's plan. If it can remove the pointless sort, why\ndoes it not do so?\n\nI agree that it will remove a Sort when the data is already has the\nexact same interesting sort order. In this case the sort order is not\nexactly the same, but looks fully removable to me.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 19 May 2009 14:10:29 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-19 at 13:01 +0100, Matthew Wakeling wrote:\n\n> That leads me on to another topic. Consider the query:\n> \n> SELECT * FROM table ORDER BY a, b\n> \n> where the column \"a\" is declared UNIQUE and has an index. Does Postgres \n> eliminate \"b\" from the ORDER BY, and therefore allow fetching without \n> sorting from the index?\n\nNo, because we don't use unique constraints much at all to infer things.\n\n> Or how about this query:\n> \n> SELECT * FROM table1, table2 WHERE table1.fk = table2.id ORDER BY\n> table1.id, table2.id\n> \n> where both \"id\" columns are UNIQUE with an index. Do we eliminate \n> \"table2.id\" from the ORDER BY in this case?\n\nYes, that is eliminated via equivalence classes.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 19 May 2009 14:19:52 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-19 at 14:00 +0200, Dimitri wrote:\n\n> I may confirm the issue with hash join - it's repeating both with\n> prepared and not prepared statements - it's curious because initially\n> the response time is lowering near ~1ms (the lowest seen until now)\n> and then once workload growing to 16 sessions it's jumping to 2.5ms,\n> then with 32 sessions it's 18ms, etc..\n\nIs it just bad all the time, or does it get worse over time?\n\nDo you get the same behaviour as 32 sessions if you run 16 sessions for\ntwice as long?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 19 May 2009 14:35:37 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On May 19, 2009, at 7:36 AM, Simon Riggs <[email protected]> wrote:\n\n>\n> On Tue, 2009-05-19 at 12:17 +0100, Matthew Wakeling wrote:\n>> Yes, Postgres has been missing the boat on this one for a while. +1 \n>> on\n>> requesting this feature.\n>\n> That's an optimizer feature.\n>\n>> Speaking of avoiding large sorts, I'd like to push again for partial\n>> sorts. This is the situation where an index provides data sorted by\n>> column \"a\", and the query requests data sorted by \"a, b\". 
Currently,\n>> Postgres sorts the entire data set, whereas it need only group each\n>> set of identical \"a\" and sort each by \"b\".\n>\n> This is an executor feature.\n>\n> Partially sorted data takes much less effort to sort (OK, not zero, I\n> grant) so this seems like a high complexity, lower value feature. I\n> agree it should be on the TODO, just IMHO at a lower priority than \n> some\n> other features.\n\nI have no particular thoughts on priority (whose priority?), but I \nwill say I've run across queries that could benefit from this \noptimization. I fairly often write queries where the first key is \nmostly unique and the second is just to make things deterministic in \nthe event of a tie. So the partial sort would be almost no work at all.\n\n...Robert\n\n\n>\n>\n> -- \n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 May 2009 10:00:36 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "The response time is not progressive, it's simply jumping, it's likely\nsince 16 sessions there is a sort of serialization happening\nsomewhere.. As well on 16 sessions the throughput in TPS is near the\nsame as on 8 (response time is only twice bigger for the moment), but\non 32 it's dramatically dropping down..\n\nRgds,\n-Dimitri\n\n\nOn 5/19/09, Simon Riggs <[email protected]> wrote:\n>\n> On Tue, 2009-05-19 at 14:00 +0200, Dimitri wrote:\n>\n>> I may confirm the issue with hash join - it's repeating both with\n>> prepared and not prepared statements - it's curious because initially\n>> the response time is lowering near ~1ms (the lowest seen until now)\n>> and then once workload growing to 16 sessions it's jumping to 2.5ms,\n>> then with 32 sessions it's 18ms, etc..\n>\n> Is it just bad all the time, or does it get worse over time?\n>\n> Do you get the same behaviour as 32 sessions if you run 16 sessions for\n> twice as long?\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n", "msg_date": "Tue, 19 May 2009 16:51:55 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On 5/19/09, Merlin Moncure <[email protected]> wrote:\n> On Mon, May 18, 2009 at 6:32 PM, Dimitri <[email protected]> wrote:\n>> Thanks Dave for correction, but I'm also curious where the time is\n>> wasted in this case?..\n>>\n>> 0.84ms is displayed by \"psql\" once the result output is printed, and I\n>> got similar time within my client (using libpq) which is not printing\n>> any output..\n>\n> Using libpq? What is the exact method you are using to execute\n> queries...PQexec?\n\nexactly\n\n> If you are preparing queries against libpq, the\n> best way to execute queries is via PQexecPrepared.\n\nthe query is *once* prepared via PQexec,\nthen it's looping with \"execute\" via PQexec.\nWhy PQexecPrepared will be better in my case?..\n\n> Also, it's\n> interesting to see if you can get any benefit from asynchronous\n> queries (PQsendPrepared), but this might involve more changes to your\n> application than you are willing to make.\n>\n> Another note: I would like to point out again that there are possible\n> negative side effects in using char(n) vs. 
varchar(n) that IIRC do not\n> exist in mysql. When you repeat your test I strongly advise switching\n> to varchar.\n\nif it's true for any case, why not just replace CHAR implementation by\nVARCHAR directly within PG code?..\n\n>\n> Another question: how exactly are you connecting to the database?\n> local machine? if so, domain socket or tcp/ip?\n\nlocal TCP/IP, same as MySQL\n\n> What are you doing\n> with the results...immediately discarding?\n\nfrom PQ side they immediately discarded once all rows are fetched\n\n>\n> One last thing: when you get access to the server, can you run a\n> custom format query test from pgbench and compare the results to your\n> test similarly configured (same number of backends, etc) in terms of\n> tps?\n\nI'll try\n\n\nRgds,\n-Dimitri\n", "msg_date": "Tue, 19 May 2009 17:53:51 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn 5/19/09 5:01 AM, \"Matthew Wakeling\" <[email protected]> wrote:\n\n> On Tue, 19 May 2009, Simon Riggs wrote:\n>>> Speaking of avoiding large sorts, I'd like to push again for partial\n>>> sorts. This is the situation where an index provides data sorted by\n>>> column \"a\", and the query requests data sorted by \"a, b\". Currently,\n>>> Postgres sorts the entire data set, whereas it need only group each\n>>> set of identical \"a\" and sort each by \"b\".\n>> \n>> Partially sorted data takes much less effort to sort (OK, not zero, I\n>> grant) so this seems like a high complexity, lower value feature. I\n>> agree it should be on the TODO, just IMHO at a lower priority than some\n>> other features.\n> \n> Not arguing with you, however I'd like to point out that partial sorting\n> allows the results to be streamed, which would lower the cost to produce\n> the first row of results significantly, and reduce the amount of RAM used\n> by the query, and prevent temporary tables from being used. That has to be\n> a fairly major win. Queries with a LIMIT would see the most benefit.\n> \n\nI will second that point --\nAlthough for smaller sorts, the partial sort doesn't help much and is just\ncomplicated -- once the sort is large, it reduces the amount of work_mem\nneeded significantly for large performance gain, and large concurrent query\nscale gain.\nAnd those benefits occur without using LIMIT.\n\n\n", "msg_date": "Tue, 19 May 2009 09:38:27 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn 5/19/09 3:46 AM, \"Dimitri\" <[email protected]> wrote:\n\n> On 5/19/09, Scott Carey <[email protected]> wrote:\n>> \n>> On 5/18/09 3:32 PM, \"Dimitri\" <[email protected]> wrote:\n>> \n>>> On 5/18/09, Scott Carey <[email protected]> wrote:\n>>>> Great data Dimitri!'\n>>> \n>>> Thank you! :-)\n>>> \n>>>> \n>>>> I see a few key trends in the poor scalability:\n>>>> \n>>>> The throughput scales roughly with %CPU fairly well. But CPU used\n>>>> doesn't\n>>>> go past ~50% on the 32 core tests. 
This indicates lock contention.\n>>>> \n>>> \n>>> You should not look on #1 STATs, but on #2 - they are all with the\n>>> latest \"fixes\" - on all of them CPU is used well (90% in pic on\n>>> 32cores).\n>>> Also, keep in mind these cores are having 2 threads, and from Solaris\n>>> point of view they are seen as CPU (so 64 CPU) and %busy is accounted\n>>> as for 64 CPU\n>>> \n>> \n>> Well, if the CPU usage is actually higher, then it might not be lock waiting\n>> -- it could be spin locks or context switches or cache coherency overhead.\n>> Postgres may also not be very SMT friendly, at least on the hardware tested\n>> here.\n> \n> do you mean SMP or CMT? ;-)\n> however both should work well with PostgreSQL. I also think about CPU\n> affinity - probably it may help to avoid CPU cache misses - but makes\n> sense mostly if pooler will be added as a part of PG.\n\nSymmetric Multi Threading (HyperThreading in Intels marketing terms, other\nmarketing terms for Sun or IBM). One CPU core that can handle more than one\nconcurrently executing thread.\nTechnically, 'SMT' allows instructions in flight from multiple threads at\nonce in a superscalar Cpu core while some implementations differ and might\ntechnically CMT (one thread or the other, but can switch fast, or a\nnon-superscalar core).\n\nFor many implementations of 'multiple threads on one CPU core' many of the\nprocessor resources are reduced per thread when it is active -- caches get\nsplit, instruction re-order buffers are split, etc. That is rather hardware\nimplementation dependant.\n\nFor Intel's SMT (and other similar), spin-locks hurt scalability if they\naren't using new special instructions for the spin to yield pipeline slots\nto the other thread.\n\nGenerally, code that stresses common processor resources more than CPU\nexecution will scale poorly with SMT/CMT etc.\n\nSo I'm not sure about the Postgres details, but the general case of an\napplication that doesn't benefit from these technologies exists, and there\nis a non-zero chance that Postgres has some characteristics of such an app.\n\n>> \n>> (what was the context switch rate? I didn't see that in the data, just\n>> mutex spins).\n> \n> increasing with a load, as this ex.:\n> http://dimitrik.free.fr/Report_20090505/5539_dim_STAT_100.html#bmk_CPU_CtxSwit\n> ch_100\n> \n\nWell, on most systems over 100K context switches/sec is a lot. And those\nreach 180000 /sec. \nHowever, this is 'only' 10 context switches per transaction and less than\n20% system CPU, so maybe those numbers aren't quite as big as they seem.\n\nOut of curiosity, what was the context switch rate for MySql at its peak\nthroughput?\n> \n>> \n>> The scalability curve is definitely showing something. Prepared statements\n>> were tried, as were most of the other suggestions other than one:\n>> \n>> What happens if the queries are more complicated (say, they take 15ms server\n>> side with a more complicated plan required)? 
That is a harder question to\n>> answer\n> \n> What I observed is: if planner takes more long time (like initially\n> with 8.3.7 and analyze target 1000) the scalability problem is\n> appearing more strange -\n> http://dimitrik.free.fr/Report_20090505/5521_dim_STAT_18.html - as you\n> see CPU even not used more than 60% , and as you may see spin locks\n> are lowering - CPUs are not spinning for locks, there is something\n> else..\n> I'm supposing a problem of some kind of synchronization - background\n> processes are not waking up on time or something like this...\n> Then, if more time spent on the query execution itself and not planner:\n> - if it'll be I/O time - I/O will hide everything else until you\n> increase a storage performance and/or add more RAM, but then you come\n> back to the initial issue :-)\n> - if it'll be a CPU time it may be interesting! :-)\n> \n> Rgds,\n> -Dimitri\n> \n\nOk, so that's good info that the planner or parser side seems to scale less\neffectively than the execution (as the results show), but I'm wondering\nabout queries with longer execution times not longer planner times. I'm\nwondering that, because its my opinion that most applications that will use\nlarger scale hardware will have more complicated queries than your test.\nIts also greedy on my part since most queries in my applications are\nsignificantly more complicated.\nRegardless of my opinions -- this test is on one extreme (small fast\nqueries) of the spectrum. Its useful to know some data points on other\nparts of the spectrum.\n\n", "msg_date": "Tue, 19 May 2009 10:13:25 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, May 19, 2009 at 11:53 AM, Dimitri <[email protected]> wrote:\n> On 5/19/09, Merlin Moncure <[email protected]> wrote:\n>> On Mon, May 18, 2009 at 6:32 PM, Dimitri <[email protected]> wrote:\n>>> Thanks Dave for correction, but I'm also curious where the time is\n>>> wasted in this case?..\n>>>\n>>> 0.84ms is displayed by \"psql\" once the result output is printed, and I\n>>> got similar time within my client (using libpq) which is not printing\n>>> any output..\n>>\n>> Using libpq?  What is the exact method you are using to execute\n>> queries...PQexec?\n>\n> exactly\n>\n>> If you are preparing queries against libpq, the\n>> best way to execute queries is via PQexecPrepared.\n>\n> the query is *once* prepared via PQexec,\n> then it's looping with \"execute\" via PQexec.\n> Why PQexecPrepared will be better in my case?..\n\nIt can be better or worse (usually better). the parameters are\nseparated from the query string. Regardless of performance, the\nparametrized interfaces are superior for any queries taking arguments\nand should be used when possible.\n\n>> Another note: I would like to point out again that there are possible\n>> negative side effects in using char(n) vs. varchar(n) that IIRC do not\n>> exist in mysql.  When you repeat your test I strongly advise switching\n>> to varchar.\n>\n> if it's true for any case, why not just replace CHAR implementation by\n> VARCHAR directly within PG code?..\n\nFirst, let me explain the difference. char(n) is padded out to 'n' on\ndisk and when returned. despite this, the length is still stored so\nthere is no real advantage to using the char(n) type except that the\nreturned string is of a guaranteed length. 
mysql, at least the\nparticular version and storage engine that I am logged into right now,\ndoes not do this for char(n). In other words, select cast('abc' as\nchar(50)) returns a string of 50 chars on pgsql and 3 chars on mysql.\nI will leave it as an exercise to the reader to figure out whom is\nfollowing the standard. pg's handling of the situation is not\nnecessarily optimal, but we just tell everyone to quit using 'char(n)'\ntype.\n\nUnless for example your 'NOTE' column is mostly full or mostly null,\nyour query is not fair because postgres has to both store and return a\nproportionally greater amount of data. This makes the comparison\nhardly apples to apples. This stuff counts when we are measuring at\nmicrosecond level.\n\n>> Another question: how exactly are you connecting to the database?\n>> local machine? if so, domain socket or tcp/ip?\n>\n> local TCP/IP, same as MySQL\n\nwould be curious to see if you get different results from domain socket.\n\nmerlin\n", "msg_date": "Tue, 19 May 2009 13:38:06 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On 5/19/09, Scott Carey <[email protected]> wrote:\n>\n> On 5/19/09 3:46 AM, \"Dimitri\" <[email protected]> wrote:\n>\n>> On 5/19/09, Scott Carey <[email protected]> wrote:\n>>>\n>>> On 5/18/09 3:32 PM, \"Dimitri\" <[email protected]> wrote:\n>>>\n>>>> On 5/18/09, Scott Carey <[email protected]> wrote:\n>>>>> Great data Dimitri!'\n>>>>\n>>>> Thank you! :-)\n>>>>\n>>>>>\n>>>>> I see a few key trends in the poor scalability:\n>>>>>\n>>>>> The throughput scales roughly with %CPU fairly well. But CPU used\n>>>>> doesn't\n>>>>> go past ~50% on the 32 core tests. This indicates lock contention.\n>>>>>\n>>>>\n>>>> You should not look on #1 STATs, but on #2 - they are all with the\n>>>> latest \"fixes\" - on all of them CPU is used well (90% in pic on\n>>>> 32cores).\n>>>> Also, keep in mind these cores are having 2 threads, and from Solaris\n>>>> point of view they are seen as CPU (so 64 CPU) and %busy is accounted\n>>>> as for 64 CPU\n>>>>\n>>>\n>>> Well, if the CPU usage is actually higher, then it might not be lock\n>>> waiting\n>>> -- it could be spin locks or context switches or cache coherency\n>>> overhead.\n>>> Postgres may also not be very SMT friendly, at least on the hardware\n>>> tested\n>>> here.\n>>\n>> do you mean SMP or CMT? ;-)\n>> however both should work well with PostgreSQL. I also think about CPU\n>> affinity - probably it may help to avoid CPU cache misses - but makes\n>> sense mostly if pooler will be added as a part of PG.\n>\n> Symmetric Multi Threading (HyperThreading in Intels marketing terms, other\n> marketing terms for Sun or IBM). One CPU core that can handle more than one\n> concurrently executing thread.\n> Technically, 'SMT' allows instructions in flight from multiple threads at\n> once in a superscalar Cpu core while some implementations differ and might\n> technically CMT (one thread or the other, but can switch fast, or a\n> non-superscalar core).\n>\n> For many implementations of 'multiple threads on one CPU core' many of the\n> processor resources are reduced per thread when it is active -- caches get\n> split, instruction re-order buffers are split, etc. 
That is rather hardware\n> implementation dependant.\n>\n> For Intel's SMT (and other similar), spin-locks hurt scalability if they\n> aren't using new special instructions for the spin to yield pipeline slots\n> to the other thread.\n>\n> Generally, code that stresses common processor resources more than CPU\n> execution will scale poorly with SMT/CMT etc.\n\nAll application are scaling well anyway, except if you have any kind\nof lock contention inside of the application itself or meet any kind\nof system resource become hot. But well, here we may spend days to\ndiscuss :-)\n\n\n>\n> So I'm not sure about the Postgres details, but the general case of an\n> application that doesn't benefit from these technologies exists, and there\n> is a non-zero chance that Postgres has some characteristics of such an app.\n>\n>>>\n>>> (what was the context switch rate? I didn't see that in the data, just\n>>> mutex spins).\n>>\n>> increasing with a load, as this ex.:\n>> http://dimitrik.free.fr/Report_20090505/5539_dim_STAT_100.html#bmk_CPU_CtxSwit\n>> ch_100\n>>\n>\n> Well, on most systems over 100K context switches/sec is a lot. And those\n> reach 180000 /sec.\n> However, this is 'only' 10 context switches per transaction and less than\n> 20% system CPU, so maybe those numbers aren't quite as big as they seem.\n>\n> Out of curiosity, what was the context switch rate for MySql at its peak\n> throughput?\n\nthe main MySQL problem is a mutex locking like here:\nhttp://dimitrik.free.fr/Report_20090504/5465_dim_STAT_31.html#bmk_SpinMtx_31\nso you have to limit a number of active threads to lower this\ncontention (similar to pooler idea folks told here)\n\nand the context switch is even higher (~200K/sec)\n\n\n>>\n>>>\n>>> The scalability curve is definitely showing something. Prepared\n>>> statements\n>>> were tried, as were most of the other suggestions other than one:\n>>>\n>>> What happens if the queries are more complicated (say, they take 15ms\n>>> server\n>>> side with a more complicated plan required)? That is a harder question\n>>> to\n>>> answer\n>>\n>> What I observed is: if planner takes more long time (like initially\n>> with 8.3.7 and analyze target 1000) the scalability problem is\n>> appearing more strange -\n>> http://dimitrik.free.fr/Report_20090505/5521_dim_STAT_18.html - as you\n>> see CPU even not used more than 60% , and as you may see spin locks\n>> are lowering - CPUs are not spinning for locks, there is something\n>> else..\n>> I'm supposing a problem of some kind of synchronization - background\n>> processes are not waking up on time or something like this...\n>> Then, if more time spent on the query execution itself and not planner:\n>> - if it'll be I/O time - I/O will hide everything else until you\n>> increase a storage performance and/or add more RAM, but then you come\n>> back to the initial issue :-)\n>> - if it'll be a CPU time it may be interesting! :-)\n>>\n>> Rgds,\n>> -Dimitri\n>>\n>\n> Ok, so that's good info that the planner or parser side seems to scale less\n> effectively than the execution (as the results show), but I'm wondering\n> about queries with longer execution times not longer planner times. I'm\n> wondering that, because its my opinion that most applications that will use\n> larger scale hardware will have more complicated queries than your test.\n> Its also greedy on my part since most queries in my applications are\n> significantly more complicated.\n> Regardless of my opinions -- this test is on one extreme (small fast\n> queries) of the spectrum. 
Its useful to know some data points on other\n> parts of the spectrum.\n\nAs I've mentioned before, such fast queries are very common for some\napplications (like banking transactions, stock management, internet\nforums, etc.) - 5-7years ago the goal with this test was to keep\nresponse time under 1sec (I'm not kidding :-) but nowdays we're\nrunning under a millisecond.. Crazy progress, no? :-))\n\nHowever, I've started to extend db_STRESS kit to accept any kind of\nquery against any kind of db schema. So if you have an interesting\ndata model and some queries to run - I'll be happy to adapt them as a\nnew scenario! :-))\n\nRgds,\n-Dimitri\n\n\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 19 May 2009 20:52:53 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On 5/19/09, Merlin Moncure <[email protected]> wrote:\n> On Tue, May 19, 2009 at 11:53 AM, Dimitri <[email protected]> wrote:\n>> On 5/19/09, Merlin Moncure <[email protected]> wrote:\n>>> On Mon, May 18, 2009 at 6:32 PM, Dimitri <[email protected]> wrote:\n>>>> Thanks Dave for correction, but I'm also curious where the time is\n>>>> wasted in this case?..\n>>>>\n>>>> 0.84ms is displayed by \"psql\" once the result output is printed, and I\n>>>> got similar time within my client (using libpq) which is not printing\n>>>> any output..\n>>>\n>>> Using libpq? What is the exact method you are using to execute\n>>> queries...PQexec?\n>>\n>> exactly\n>>\n>>> If you are preparing queries against libpq, the\n>>> best way to execute queries is via PQexecPrepared.\n>>\n>> the query is *once* prepared via PQexec,\n>> then it's looping with \"execute\" via PQexec.\n>> Why PQexecPrepared will be better in my case?..\n>\n> It can be better or worse (usually better). the parameters are\n> separated from the query string. Regardless of performance, the\n> parametrized interfaces are superior for any queries taking arguments\n> and should be used when possible.\n\nyou're probably right, but I don't like either when solution become so\ncomplicated - PG has a so elegant way to execute a prepared query!\n\n\n>\n>>> Another note: I would like to point out again that there are possible\n>>> negative side effects in using char(n) vs. varchar(n) that IIRC do not\n>>> exist in mysql. When you repeat your test I strongly advise switching\n>>> to varchar.\n>>\n>> if it's true for any case, why not just replace CHAR implementation by\n>> VARCHAR directly within PG code?..\n>\n> First, let me explain the difference. char(n) is padded out to 'n' on\n> disk and when returned. despite this, the length is still stored so\n> there is no real advantage to using the char(n) type except that the\n> returned string is of a guaranteed length. mysql, at least the\n> particular version and storage engine that I am logged into right now,\n> does not do this for char(n). In other words, select cast('abc' as\n> char(50)) returns a string of 50 chars on pgsql and 3 chars on mysql.\n> I will leave it as an exercise to the reader to figure out whom is\n> following the standard. 
pg's handling of the situation is not\n> necessarily optimal, but we just tell everyone to quit using 'char(n)'\n> type.\n>\n> Unless for example your 'NOTE' column is mostly full or mostly null,\n> your query is not fair because postgres has to both store and return a\n> proportionally greater amount of data. This makes the comparison\n> hardly apples to apples. This stuff counts when we are measuring at\n> microsecond level.\n\nGood point! I may confirm only at least at the beginning all fields\nare fully filled within a database. Will test both engines with\nVARCHAR next time to be sure it's not an issue.\n\n\n>\n>>> Another question: how exactly are you connecting to the database?\n>>> local machine? if so, domain socket or tcp/ip?\n>>\n>> local TCP/IP, same as MySQL\n>\n> would be curious to see if you get different results from domain socket.\n\nat least for PG there was no difference if I remember well.\nHowever, before when I tested on the real network I finished by change\ncompletely my code to reduce a network traffic (initially I've used\ncursors), and finally PG traffic was lower or similar to MySQL, it was\nan interesting stuff too :-)\n\nRgds,\n-Dimitri\n\n\n>\n> merlin\n>\n", "msg_date": "Tue, 19 May 2009 21:15:13 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Tue, May 19, 2009 at 3:15 PM, Dimitri <[email protected]> wrote:\n> On 5/19/09, Merlin Moncure <[email protected]> wrote:\n>> On Tue, May 19, 2009 at 11:53 AM, Dimitri <[email protected]> wrote:\n>>> the query is *once* prepared via PQexec,\n>>> then it's looping with \"execute\" via PQexec.\n>>> Why PQexecPrepared will be better in my case?..\n>>\n>> It can be better or worse (usually better).  the parameters are\n>> separated from the query string.  Regardless of performance, the\n>> parametrized interfaces are superior for any queries taking arguments\n>> and should be used when possible.\n>\n> you're probably right, but I don't like either when solution become so\n> complicated - PG has a so elegant way to execute a prepared query!\n\nIt's not so bad.\n\nPQexec:\nsprintf(buf, query, char_arg1, my_arg2);\nPQexec(conn, query);\nsprintf(buf, query, char_arg1, my_arg2);\nPQexec(conn, query);\n\nPQexecParams:\nchar *vals[2];\nint formats[2] ={0,0};\nvals = {char_arg1, char_arg2};\nPQexecPrepared(conn, stmt, 2, vals, NULL, formats, 0);\nvals = {char_arg1, char_arg2};\nPQexecPrepared(conn, stmt, 2, vals, NULL, formats, 0);\n\nThe setup is a little rough, and 'non strings' can be a pain vs.\nprintf, but the queries are safer (goodbye sql injection) and usually\nfaster. Also the door is opened to binary formats which can be huge\nperformance win on some data types...especially bytea, date/time, and\ngeo. There are some good quality libraries out there to help dealing\nwith execparams family of functions :D.\n\nmerlin\n", "msg_date": "Tue, 19 May 2009 17:48:49 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Tue, 2009-05-19 at 08:58 -0400, Tom Lane wrote:\n>> Nonsense. The planner might think some other plan is cheaper, but\n>> it definitely knows how to do this, and has since at least 8.1.\n\n> Please look at Dimitri's plan. 
If it can remove the pointless sort, why\n> does it not do so?\n\nI haven't followed the whole thread, but the plan in the original post\nis for a hash join. The planner does not trust a hash join to preserve\nthe order of its left input, because of possible batching. See the\ndiscussion a couple of months ago where we considered allowing the\nplanner to disable batching so it *could* assume order preservation, and\ndecided the risk of hashtable bloat was too great.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 May 2009 18:49:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.. " }, { "msg_contents": "On Tue, May 19, 2009 at 6:49 PM, Tom Lane <[email protected]> wrote:\n> Simon Riggs <[email protected]> writes:\n>> On Tue, 2009-05-19 at 08:58 -0400, Tom Lane wrote:\n>>> Nonsense.  The planner might think some other plan is cheaper, but\n>>> it definitely knows how to do this, and has since at least 8.1.\n>\n>> Please look at Dimitri's plan. If it can remove the pointless sort, why\n>> does it not do so?\n>\n> I haven't followed the whole thread, but the plan in the original post\n> is for a hash join.  The planner does not trust a hash join to preserve\n> the order of its left input, because of possible batching.  See the\n> discussion a couple of months ago where we considered allowing the\n> planner to disable batching so it *could* assume order preservation, and\n> decided the risk of hashtable bloat was too great.\n\nHmm, my recollection of that conversation was that we decided that we\nshould have the planner tell the executor whether or not we are\nrelying on it to produce sorted output. We set this flag only when\nthe hash join is expected to fit comfortably within one batch (that\nis, we allow a safety margin). If this flag is set and a hash join\nunexpectedly goes multi-batch, then we perform a final merge pass (a\nla merge sort) before returning any results.\n\nI don't think it's a good idea to write off the idea of implementing\nthis optimization at some point. I see a lot of queries that join one\nfairly large table against a whole bunch of little tables, and then\nsorting the results by a column that is indexed in the big table. The\noptimizer handles this by sequentially scanning the big table, hash\njoining against all of the little tables, and then sorting the output,\nwhich is pretty silly (given that all of the tables fit in RAM and are\nin fact actually cached there). If there is a LIMIT clause, then it\nmight instead index-scan the big table, do the hash joins, and then\nsort the already-ordered results. This is better because at least\nwe're not sorting the entire table unnecessarily but it's still poor.\n\n...Robert\n", "msg_date": "Tue, 19 May 2009 23:54:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Tue, 2009-05-19 at 23:54 -0400, Robert Haas wrote:\n\n> I don't think it's a good idea to write off the idea of implementing\n> this optimization at some point. I see a lot of queries that join one\n> fairly large table against a whole bunch of little tables, and then\n> sorting the results by a column that is indexed in the big table. 
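For example, roughly this shape (all table and column names invented):\n\n  SELECT b.*, s1.name, s2.name\n    FROM big b\n    JOIN small1 s1 ON (s1.id = b.s1_id)\n    JOIN small2 s2 ON (s2.id = b.s2_id)\n   ORDER BY b.sort_col\n   LIMIT 100;\n\nwhere big.sort_col is indexed and the small tables fit comfortably in\nmemory.\n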
\n\nAgreed it's a common use case.\n\n> The\n> optimizer handles this by sequentially scanning the big table, hash\n> joining against all of the little tables, and then sorting the output,\n> which is pretty silly (given that all of the tables fit in RAM and are\n> in fact actually cached there). If there is a LIMIT clause, then it\n> might instead index-scan the big table, do the hash joins, and then\n> sort the already-ordered results. This is better because at least\n> we're not sorting the entire table unnecessarily but it's still poor.\n\nThe Hash node is fully executed before we start pulling rows through the\nHash Join node. So the Hash Join node will know at execution time\nwhether or not it will continue to maintain sorted order. So we put the\nSort node into the plan, then the Sort node can just ask the Hash Join\nat execution time whether it should perform a sort or just pass rows\nthrough (act as a no-op).\n\nThe cost of the Sort node can either be zero, or pro-rated down from the\nnormal cost based upon what we think the probability is of going\nmulti-batch, which would vary by work_mem available.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 20 May 2009 09:11:37 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "On Wed, May 20, 2009 at 4:11 AM, Simon Riggs <[email protected]> wrote:\n> The Hash node is fully executed before we start pulling rows through the\n> Hash Join node. So the Hash Join node will know at execution time\n> whether or not it will continue to maintain sorted order. So we put the\n> Sort node into the plan, then the Sort node can just ask the Hash Join\n> at execution time whether it should perform a sort or just pass rows\n> through (act as a no-op).\n\nIt's not actually a full sort. For example if the join has two\nbatches, you don't need to dump all of the tuples from both batches\ninto a sort. Each of the two tapes produced by the hash join is\nsorted, but if you read tape one and then tape two, of course then it\nwon't be. What you want to do is read the first tuple from each tape\nand return whichever one is smaller, and put the other one back; then\nlather, rinse, and repeat. Because it's such a special-case\ncomputation, I think you're going to want to implement it within the\nHashJoin node rather than inserting a Sort node (or any other kind).\n\n...Robert\n", "msg_date": "Wed, 20 May 2009 07:17:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." }, { "msg_contents": "\nOn Wed, 2009-05-20 at 07:17 -0400, Robert Haas wrote:\n> On Wed, May 20, 2009 at 4:11 AM, Simon Riggs <[email protected]> wrote:\n> > The Hash node is fully executed before we start pulling rows through the\n> > Hash Join node. So the Hash Join node will know at execution time\n> > whether or not it will continue to maintain sorted order. So we put the\n> > Sort node into the plan, then the Sort node can just ask the Hash Join\n> > at execution time whether it should perform a sort or just pass rows\n> > through (act as a no-op).\n> \n> It's not actually a full sort. For example if the join has two\n> batches, you don't need to dump all of the tuples from both batches\n> into a sort. Each of the two tapes produced by the hash join is\n> sorted, but if you read tape one and then tape two, of course then it\n> won't be. 
What you want to do is read the first tuple from each tape\n> and return whichever one is smaller, and put the other one back; then\n> lather, rinse, and repeat. Because it's such a special-case\n> computation, I think you're going to want to implement it within the\n> HashJoin node rather than inserting a Sort node (or any other kind).\n\nThat has wider applicability and seems sound. It will also be easier to\nassess a cost for that aspect in the optimizer. I like that approach.\n\nCode wise, you'll need to refactor things quite a lot to make the\ntuplesort code accessible to the HJ node. The sorting code is going to\nget pretty hectic if we add in all the ideas for this, partial sort,\nimproved sorting (at least 3 other ideas). Perhaps it will be easier to\nwrite a specific final merge routine just for HJs. \n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 20 May 2009 12:52:26 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any better plan for this query?.." } ]
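The plan shape discussed in this thread, a large table that is index-ordered on the sort column and hash-joined to small lookup tables with an explicit Sort kept on top, is easy to reproduce for experimentation. The following minimal test case is hypothetical (all table and column names are invented, and the plan actually chosen depends on version, statistics and work_mem); comparing the two EXPLAIN outputs shows where the Sort node appears and what an order-preserving alternative can look like.

-- Hypothetical schema: one large table with an index on the sort column,
-- plus a small lookup table, mirroring the pattern described above.
CREATE TABLE bigtab (id serial PRIMARY KEY, lookup_id integer, created timestamp);
CREATE TABLE lookup (id serial PRIMARY KEY, label text);
CREATE INDEX bigtab_created_idx ON bigtab (created);

INSERT INTO lookup (label)
SELECT 'label ' || g FROM generate_series(1, 100) g;
INSERT INTO bigtab (lookup_id, created)
SELECT (random() * 99)::integer + 1,
       now() - random() * interval '90 days'
FROM generate_series(1, 1000000);
ANALYZE lookup;
ANALYZE bigtab;

-- Typically planned as a Sort over a Hash Join over a seq scan of bigtab
-- (the shape discussed above): the ordering available from
-- bigtab_created_idx cannot be used, because a hash join is not trusted
-- to preserve the order of its outer input once it goes multi-batch.
EXPLAIN ANALYZE
SELECT b.created, l.label
FROM bigtab b JOIN lookup l ON l.id = b.lookup_id
ORDER BY b.created;

-- For comparison only: steering the planner away from a hash join may
-- produce a nested loop that returns rows already in index order, or a
-- merge join that still needs the final sort; which wins depends on the
-- data and on work_mem.
SET enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT b.created, l.label
FROM bigtab b JOIN lookup l ON l.id = b.lookup_id
ORDER BY b.created;
RESET enable_hashjoin;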
[ { "msg_contents": "Hi,\n\nSome context, we have a _lot_ of data, > 1TB, mostly in 1 'table' -\nthe 'datatable' in the example below although in order to improve\nperformance this table is partitioned (by date range) into a number of\npartition tables. Each partition contains up to 20GB of data (tens of\nmillons of rows), with an additional ~3GB of indexes, all this is\nserved off a fairly high performance server (8 core 32Gb, with FC\nattached SAN storage). PostgreSQL version is 8.3.5 (running on 64bit\nRHEL 5.2)\n\nThis has been working reasonably well, however in the last few days\nI've been seeing extremely slow performance on what are essentially\nfairly simple 'index hitting' selects on this data. From the host\nside I see that the postgres query process is mostly in IO wait,\nhowever there is very little data actually being transferred (maybe\n2-4 MB/s) - when a different query (say a select count(*) form\ndatatable) will yield a sustained 150+ MB/s. There have been no\nconfiguration changes during this time, although of course the\ndatabase has grown as data is added on a daily basis.\n\nI'm not sure of the best way to diagnose this issue - the possible\ncauses I can think of are:\n\n1. Problem with random versus sequential reads on storage system.\n2. 'Something' with PostgreSQL itself.\n3. Problem with the host environment - one suspicion I have here is\nthat we are >90% full on the storage drives (ext3), I'm not sure if\nthat is impacting performance.\n\nAny thoughts as to how to procede from here would be very welcome.\n\nHere is an example query plan - looks reasonable to me, seems is\nmaking use of the indexes and the constraint exclusion on the\npartition tables:\n\nNested Loop Left Join (cost=0.00..6462463.96 rows=1894 width=110)\n -> Append (cost=0.00..6453365.66 rows=1894 width=118)\n -> Seq Scan on datatable sum (cost=0.00..10.75 rows=1 width=118)\n Filter: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text) AND (NOT CASE WHEN (NOT obsolete)\nTHEN false ELSE CASE WHEN (obsoletereasonid IS NULL) THEN true WHEN\n(obsoletereasonid = 1) THEN true WHEN (obsoletereasonid = 2) THEN true\nWHEN (cdrdatasourceid = 1) THEN false ELSE true END END))\n -> Index Scan using\ndatatable_20090328_customeriddatapointdate_idx on datatable_20090328\nsum (cost=0.00..542433.51 rows=180 width=49)\n Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n -> Index Scan using\ndatatable_20090404_customeriddatapointdate_idx on datatable_20090404\nsum (cost=0.00..1322098.74 rows=405 width=48)\n Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n -> Index Scan 
using\ndatatable_20090411_customeriddatapointdate_idx on datatable_20090411\nsum (cost=0.00..1612744.29 rows=450 width=48)\n Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n -> Index Scan using\ndatatable_20090418_customeriddatapointdate_idx on datatable_20090418\nsum (cost=0.00..1641913.58 rows=469 width=49)\n Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n -> Index Scan using\ndatatable_20090425_customeriddatapointdate_idx on datatable_20090425\nsum (cost=0.00..1334164.80 rows=389 width=49)\n Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n -> Index Scan using pk_cdrextension on cdrextension ext\n(cost=0.00..4.77 rows=1 width=8)\n Index Cond: (sum.id = ext.datatableid)\n\n\nThanks,\n\nDavid.\n", "msg_date": "Thu, 7 May 2009 10:14:27 -0400", "msg_from": "David Brain <[email protected]>", "msg_from_op": true, "msg_subject": "Slow select performance despite seemingly reasonable query plan" }, { "msg_contents": "On Thu, May 7, 2009 at 10:14 AM, David Brain <[email protected]> wrote:\n\n> Hi,\n>\n> Some context, we have a _lot_ of data, > 1TB, mostly in 1 'table' -\n> the 'datatable' in the example below although in order to improve\n> performance this table is partitioned (by date range) into a number of\n> partition tables. Each partition contains up to 20GB of data (tens of\n> millons of rows), with an additional ~3GB of indexes, all this is\n> served off a fairly high performance server (8 core 32Gb, with FC\n> attached SAN storage). PostgreSQL version is 8.3.5 (running on 64bit\n> RHEL 5.2)\n>\n> This has been working reasonably well, however in the last few days\n> I've been seeing extremely slow performance on what are essentially\n> fairly simple 'index hitting' selects on this data.\n\n\n Have you re-indexed any of your partitioned tables? If you're index is\nfragmented, you'll be incurring extra I/O's per index access. Take a look\nat the pgstattuple contrib for some functions to determine index\nfragmentation. 
You can also take a look at the pg_stat_all_indexes tables.\nIf your number of tup's fetched is 100 x more than your idx_scans, you *may*\nconsider reindexing.\n\n--Scott\n\nOn Thu, May 7, 2009 at 10:14 AM, David Brain <[email protected]> wrote:\nHi,\n\nSome context, we have a _lot_ of data, > 1TB, mostly in 1 'table' -\nthe 'datatable' in the example below although in order to improve\nperformance this table is partitioned (by date range) into a number of\npartition tables.  Each partition contains up to 20GB of data (tens of\nmillons of rows), with an additional ~3GB of indexes, all this is\nserved off a fairly high performance server (8 core 32Gb, with FC\nattached SAN storage).  PostgreSQL version is 8.3.5 (running on 64bit\nRHEL 5.2)\n\nThis has been working reasonably well, however in the last few days\nI've been seeing extremely slow performance on what are essentially\nfairly simple 'index hitting' selects on this data.     Have you re-indexed any of your partitioned tables?  If you're index is fragmented, you'll be incurring extra I/O's per index access.  Take a look at the pgstattuple contrib for some functions to determine index fragmentation.  You can also take a look at the pg_stat_all_indexes tables.  If your number of tup's fetched is 100 x more than your idx_scans, you *may* consider reindexing.\n--Scott", "msg_date": "Thu, 7 May 2009 10:26:26 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select performance despite seemingly reasonable query plan" }, { "msg_contents": "Hi,\n\nInteresting, for one index on one partition:\n\nidx_scan: 329\nidx_tup_fetch: 8905730\n\nSo maybe a reindex would help?\n\nDavid.\n\nOn Thu, May 7, 2009 at 10:26 AM, Scott Mead\n<[email protected]> wrote:\n> On Thu, May 7, 2009 at 10:14 AM, David Brain <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> Some context, we have a _lot_ of data, > 1TB, mostly in 1 'table' -\n>> the 'datatable' in the example below although in order to improve\n>> performance this table is partitioned (by date range) into a number of\n>> partition tables.  Each partition contains up to 20GB of data (tens of\n>> millons of rows), with an additional ~3GB of indexes, all this is\n>> served off a fairly high performance server (8 core 32Gb, with FC\n>> attached SAN storage).  PostgreSQL version is 8.3.5 (running on 64bit\n>> RHEL 5.2)\n>>\n>> This has been working reasonably well, however in the last few days\n>> I've been seeing extremely slow performance on what are essentially\n>> fairly simple 'index hitting' selects on this data.\n>\n>    Have you re-indexed any of your partitioned tables?  If you're index is\n> fragmented, you'll be incurring extra I/O's per index access.  Take a look\n> at the pgstattuple contrib for some functions to determine index\n> fragmentation.  You can also take a look at the pg_stat_all_indexes tables.\n> If your number of tup's fetched is 100 x more than your idx_scans, you *may*\n> consider reindexing.\n>\n> --Scott\n>\n>\n\n\n\n-- \nDavid Brain\[email protected]\n919.297.1078\n", "msg_date": "Thu, 7 May 2009 10:48:49 -0400", "msg_from": "David Brain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow select performance despite seemingly reasonable query plan" }, { "msg_contents": "On Thu, 7 May 2009, David Brain wrote:\n> This has been working reasonably well, however in the last few days\n> I've been seeing extremely slow performance on what are essentially\n> fairly simple 'index hitting' selects on this data. 
From the host\n> side I see that the postgres query process is mostly in IO wait,\n> however there is very little data actually being transferred (maybe\n> 2-4 MB/s) - when a different query (say a select count(*) form\n> datatable) will yield a sustained 150+ MB/s.\n\nHas there been a performance *change*, or are you just concerned about a \nquery which doesn't seem to use \"enough\" disc bandwidth?\n\n> 1. Problem with random versus sequential reads on storage system.\n\nCertainly random access like this index scan can be extremely slow. 2-4 \nMB/s is quite reasonable if you're fetching one 8kB block per disc seek - \nno more than 200 per second.\n\n> 3. Problem with the host environment - one suspicion I have here is\n> that we are >90% full on the storage drives (ext3), I'm not sure if\n> that is impacting performance.\n\nOne concern I might have with a big setup like that is how big the \ndatabase directory has got, and whether directory lookups are taking time. \nCheck to see if you have the directory_index option enabled on your ext3 \nfilesystem.\n\nMatthew\n\n-- \n The third years are wandering about all worried at the moment because they\n have to hand in their final projects. Please be sympathetic to them, say\n things like \"ha-ha-ha\", but in a sympathetic tone of voice \n -- Computer Science Lecturer\n", "msg_date": "Thu, 7 May 2009 15:53:00 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select performance despite seemingly reasonable\n query plan" }, { "msg_contents": "Hi,\n\nSome answers in-line:\n\n>\n> Has there been a performance *change*, or are you just concerned about a\n> query which doesn't seem to use \"enough\" disc bandwidth?\n\nPerformance has degraded noticeably over the past few days.\n\n> Certainly random access like this index scan can be extremely slow. 2-4 MB/s\n> is quite reasonable if you're fetching one 8kB block per disc seek - no more\n> than 200 per second.\n\nWe have read ahead set pretty aggressively high as the SAN seems to\n'like' this, given some testing we did:\n\n/sbin/blockdev --getra /dev/sdb\n16384\n\n\n> One concern I might have with a big setup like that is how big the database\n> directory has got, and whether directory lookups are taking time. 
Check to\n> see if you have the directory_index option enabled on your ext3 filesystem.\n>\n\nThat's a thought, I doubt the option is set (I didn't set it and I\ndon't _think_ rhel does by default), however the 'base' directory only\ncontains ~5500 items total, so it's not getting too out of hand.\n\nDavid\n", "msg_date": "Thu, 7 May 2009 11:11:45 -0400", "msg_from": "David Brain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow select performance despite seemingly reasonable query plan" }, { "msg_contents": ">\n> Nested Loop Left Join (cost=0.00..6462463.96 rows=1894 width=110)\n> -> Append (cost=0.00..6453365.66 rows=1894 width=118)\n> -> Seq Scan on datatable sum (cost=0.00..10.75 rows=1 width=118)\n> Filter: ((datapointdate >= '2009-04-01\n> 00:00:00'::timestamp without time zone) AND (datapointdate <=\n> '2009-04-30 23:59:59'::timestamp without time zone) AND\n> ((customerid)::text = 'xxxx'::text) AND (NOT CASE WHEN (NOT obsolete)\n> THEN false ELSE CASE WHEN (obsoletereasonid IS NULL) THEN true WHEN\n> (obsoletereasonid = 1) THEN true WHEN (obsoletereasonid = 2) THEN true\n> WHEN (cdrdatasourceid = 1) THEN false ELSE true END END))\n> -> Index Scan using\n> datatable_20090328_customeriddatapointdate_idx on datatable_20090328\n> sum (cost=0.00..542433.51 rows=180 width=49)\n> Index Cond: ((datapointdate >= '2009-04-01\n> 00:00:00'::timestamp without time zone) AND (datapointdate <=\n> '2009-04-30 23:59:59'::timestamp without time zone) AND\n> ((customerid)::text = 'xxxx'::text))\n> Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\n> CASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n> = 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n> (cdrdatasourceid = 1) THEN false ELSE true END END)\n> -> Index Scan using\n> datatable_20090404_customeriddatapointdate_idx on datatable_20090404\n> sum (cost=0.00..1322098.74 rows=405 width=48)\n> Index Cond: ((datapointdate >= '2009-04-01\n> 00:00:00'::timestamp without time zone) AND (datapointdate <=\n> '2009-04-30 23:59:59'::timestamp without time zone) AND\n> ((customerid)::text = 'xxxx'::text))\n> Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\n> CASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n> = 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n> (cdrdatasourceid = 1) THEN false ELSE true END END)\n> -> Index Scan using\n> datatable_20090411_customeriddatapointdate_idx on datatable_20090411\n> sum (cost=0.00..1612744.29 rows=450 width=48)\n> Index Cond: ((datapointdate >= '2009-04-01\n> 00:00:00'::timestamp without time zone) AND (datapointdate <=\n> '2009-04-30 23:59:59'::timestamp without time zone) AND\n> ((customerid)::text = 'xxxx'::text))\n> Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\n> CASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n> = 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n> (cdrdatasourceid = 1) THEN false ELSE true END END)\n> -> Index Scan using\n> datatable_20090418_customeriddatapointdate_idx on datatable_20090418\n> sum (cost=0.00..1641913.58 rows=469 width=49)\n> Index Cond: ((datapointdate >= '2009-04-01\n> 00:00:00'::timestamp without time zone) AND (datapointdate <=\n> '2009-04-30 23:59:59'::timestamp without time zone) AND\n> ((customerid)::text = 'xxxx'::text))\n> Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\n> CASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n> = 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n> (cdrdatasourceid = 1) THEN false 
ELSE true END END)\n> -> Index Scan using\n> datatable_20090425_customeriddatapointdate_idx on datatable_20090425\n> sum (cost=0.00..1334164.80 rows=389 width=49)\n> Index Cond: ((datapointdate >= '2009-04-01\n> 00:00:00'::timestamp without time zone) AND (datapointdate <=\n> '2009-04-30 23:59:59'::timestamp without time zone) AND\n> ((customerid)::text = 'xxxx'::text))\n> Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\n> CASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n> = 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n> (cdrdatasourceid = 1) THEN false ELSE true END END)\n> -> Index Scan using pk_cdrextension on cdrextension ext\n> (cost=0.00..4.77 rows=1 width=8)\n> Index Cond: (sum.id = ext.datatableid)\n>\n>\nSomething doesn't look right. Why is it doing an index scan on\ndatatable_20090404 when the constraint for that table puts it as entirely in\nthe date range? Shouldn't it just seq scan the partition or use the\npartition's customerid index?\n\n\nNested Loop Left Join  (cost=0.00..6462463.96 rows=1894 width=110)\n   ->  Append  (cost=0.00..6453365.66 rows=1894 width=118)\n         ->  Seq Scan on datatable sum  (cost=0.00..10.75 rows=1 width=118)\n               Filter: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text) AND (NOT CASE WHEN (NOT obsolete)\nTHEN false ELSE CASE WHEN (obsoletereasonid IS NULL) THEN true WHEN\n(obsoletereasonid = 1) THEN true WHEN (obsoletereasonid = 2) THEN true\nWHEN (cdrdatasourceid = 1) THEN false ELSE true END END))\n         ->  Index Scan using\ndatatable_20090328_customeriddatapointdate_idx on datatable_20090328\nsum  (cost=0.00..542433.51 rows=180 width=49)\n               Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n               Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n         ->  Index Scan using\ndatatable_20090404_customeriddatapointdate_idx on datatable_20090404\nsum  (cost=0.00..1322098.74 rows=405 width=48)\n               Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n               Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n         ->  Index Scan using\ndatatable_20090411_customeriddatapointdate_idx on datatable_20090411\nsum  (cost=0.00..1612744.29 rows=450 width=48)\n               Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n               Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n         ->  Index Scan 
using\ndatatable_20090418_customeriddatapointdate_idx on datatable_20090418\nsum  (cost=0.00..1641913.58 rows=469 width=49)\n               Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n               Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n         ->  Index Scan using\ndatatable_20090425_customeriddatapointdate_idx on datatable_20090425\nsum  (cost=0.00..1334164.80 rows=389 width=49)\n               Index Cond: ((datapointdate >= '2009-04-01\n00:00:00'::timestamp without time zone) AND (datapointdate <=\n'2009-04-30 23:59:59'::timestamp without time zone) AND\n((customerid)::text = 'xxxx'::text))\n               Filter: (NOT CASE WHEN (NOT obsolete) THEN false ELSE\nCASE WHEN (obsoletereasonid IS NULL) THEN true WHEN (obsoletereasonid\n= 1) THEN true WHEN (obsoletereasonid = 2) THEN true WHEN\n(cdrdatasourceid = 1) THEN false ELSE true END END)\n   ->  Index Scan using pk_cdrextension on cdrextension ext\n(cost=0.00..4.77 rows=1 width=8)\n         Index Cond: (sum.id = ext.datatableid)\n\nSomething doesn't look right.  Why is it doing an index scan on datatable_20090404 when the constraint for that table puts it as entirely in the date range? Shouldn't it just seq scan the partition or use the partition's customerid index?", "msg_date": "Thu, 7 May 2009 11:18:03 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select performance despite seemingly reasonable query plan" }, { "msg_contents": "On Thu, 7 May 2009, David Brain wrote:\n>> Certainly random access like this index scan can be extremely slow. 2-4 MB/s\n>> is quite reasonable if you're fetching one 8kB block per disc seek - no more\n>> than 200 per second.\n>\n> We have read ahead set pretty aggressively high as the SAN seems to\n> 'like' this, given some testing we did:\n>\n> /sbin/blockdev --getra /dev/sdb\n> 16384\n\nRead-ahead won't really help with completely random access.\n\nI think a much more interesting line of enquiry will be trying to work out \nwhat has changed, and why it was fast before.\n\nHow much of the data you're accessing are you expecting to be in the OS \ncache?\n\nIs the table you're index scanning on ordered at all? Could that have \nchanged recently?\n\n> That's a thought, I doubt the option is set (I didn't set it and I\n> don't _think_ rhel does by default), however the 'base' directory only\n> contains ~5500 items total, so it's not getting too out of hand.\n\nI think quite a few systems do set it by default now.\n\nMatthew\n\n-- \n Me... a skeptic? I trust you have proof?\n", "msg_date": "Thu, 7 May 2009 16:19:15 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select performance despite seemingly reasonable query plan" }, { "msg_contents": "On Thu, May 7, 2009 at 11:19 AM, Matthew Wakeling <[email protected]>wrote:\n\n> On Thu, 7 May 2009, David Brain wrote:\n>\n>> Certainly random access like this index scan can be extremely slow. 
2-4\n>>> MB/s\n>>> is quite reasonable if you're fetching one 8kB block per disc seek - no\n>>> more\n>>> than 200 per second.\n>>>\n>>\n>> We have read ahead set pretty aggressively high as the SAN seems to\n>> 'like' this, given some testing we did:\n>>\n>> /sbin/blockdev --getra /dev/sdb\n>> 16384\n>>\n>\n> Read-ahead won't really help with completely random access.\n\n\nThats a shame because it would be really nice to get the entire index into\nshared memory or OS cache. Most of the time queries are on data in the past\nfew months. All of the indexes in the past few months should fit in cache.\n\nDid something happen to get those indexes flushed from the cache? Were they\nin the cache before?\n\n\n> I think a much more interesting line of enquiry will be trying to work out\n> what has changed, and why it was fast before.\n>\n> How much of the data you're accessing are you expecting to be in the OS\n> cache?\n>\n> Is the table you're index scanning on ordered at all? Could that have\n> changed recently?\n\n\nI wrote the application that puts data in that table. Its sort of ordered\nby that timestamp. Every five minutes it adds rows in no particular order\nthat need to be added. The rows that need to be added every five minutes\nare ordered by another timestamp that is correlated to but not the same as\nthe indexed timestamp.\n\n\n>\n>\n> That's a thought, I doubt the option is set (I didn't set it and I\n>> don't _think_ rhel does by default), however the 'base' directory only\n>> contains ~5500 items total, so it's not getting too out of hand.\n>>\n>\n> I think quite a few systems do set it by default now.\n>\n> Matthew\n>\n> --\n> Me... a skeptic? I trust you have proof?\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Thu, May 7, 2009 at 11:19 AM, Matthew Wakeling <[email protected]> wrote:\nOn Thu, 7 May 2009, David Brain wrote:\n\n\nCertainly random access like this index scan can be extremely slow. 2-4 MB/s\nis quite reasonable if you're fetching one 8kB block per disc seek - no more\nthan 200 per second.\n\n\nWe have read ahead set pretty aggressively high as the SAN seems to\n'like' this, given some testing we did:\n\n/sbin/blockdev --getra /dev/sdb\n16384\n\n\nRead-ahead won't really help with completely random access.Thats a shame because it would be really nice to get the entire index into shared memory or OS cache.  Most of the time queries are on data in the past few months.  All of the indexes in the past few months should fit in cache.\nDid something happen to get those indexes flushed from the cache?  Were they in the cache before?\n\nI think a much more interesting line of enquiry will be trying to work out what has changed, and why it was fast before.\n\nHow much of the data you're accessing are you expecting to be in the OS cache?\n\nIs the table you're index scanning on ordered at all? Could that have changed recently?I wrote the application that puts data in that table.  Its sort of ordered by that timestamp.  Every five minutes it adds rows in no particular order that need to be added.  
The rows that need to be added every five minutes are ordered by another timestamp that is correlated to but not the same as the indexed timestamp.\n \n\n\nThat's a thought, I doubt the option is set (I didn't set it and I\ndon't _think_ rhel does by default), however the 'base' directory only\ncontains ~5500 items total, so it's not getting too out of hand.\n\n\nI think quite a few systems do set it by default now.\n\nMatthew\n\n-- \nMe... a skeptic?  I trust you have proof?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 7 May 2009 11:37:52 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select performance despite seemingly reasonable query plan" }, { "msg_contents": "2009/5/7 David Brain <[email protected]>:\n> Hi,\nHi,\n>\n> Some answers in-line:\n>\n>>\n>> Has there been a performance *change*, or are you just concerned about a\n>> query which doesn't seem to use \"enough\" disc bandwidth?\n>\n> Performance has degraded noticeably over the past few days.\n>\n>> Certainly random access like this index scan can be extremely slow. 2-4 MB/s\n>> is quite reasonable if you're fetching one 8kB block per disc seek - no more\n>> than 200 per second.\n>\n> We have read ahead set pretty aggressively high as the SAN seems to\n> 'like' this, given some testing we did:\n>\n> /sbin/blockdev --getra /dev/sdb\n> 16384\n>\n>\n>> One concern I might have with a big setup like that is how big the database\n>> directory has got, and whether directory lookups are taking time. Check to\n>> see if you have the directory_index option enabled on your ext3 filesystem.\n>>\n>\n> That's a thought, I doubt the option is set (I didn't set it and I\n> don't _think_ rhel does by default), however the 'base' directory only\n> contains ~5500 items total, so it's not getting too out of hand.\ndefault rhel ext3 options are (in 4.x and 5.x) :\nFilesystem features: has_journal ext_attr resize_inode dir_index\nfiletype needs_recovery sparse_super large_file\nSee tune2fs -l /dev/sdXY\nLaurent.\n", "msg_date": "Sat, 9 May 2009 11:43:05 +0200", "msg_from": "Laurent Wandrebeck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select performance despite seemingly reasonable query plan" } ]
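For readers who want to reproduce the checks suggested in this thread, the queries below sketch one way to do it. The partition and index names are taken from the plans quoted above and are otherwise placeholders; pgstatindex() requires the contrib/pgstattuple module to be installed. The idx_tup_fetch to idx_scan ratio comes straight from pg_stat_all_indexes, and the pgstatindex() output (leaf density and fragmentation) is the usual basis for deciding whether a REINDEX is worthwhile.

-- Tuples fetched per index scan: a very large ratio on one partition's
-- index is a hint (not proof) that each scan is doing a lot of work.
SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_fetch,
       CASE WHEN idx_scan > 0
            THEN round(idx_tup_fetch::numeric / idx_scan, 1)
       END AS tup_fetch_per_scan
FROM pg_stat_all_indexes
WHERE relname LIKE 'datatable_2009%'
ORDER BY tup_fetch_per_scan DESC NULLS LAST;

-- With contrib/pgstattuple installed, pgstatindex() shows how dense and
-- how fragmented the b-tree leaf pages of a single index are.
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('datatable_20090404_customeriddatapointdate_idx');

-- If the numbers look bad, rebuilding one partition's index at a time
-- keeps the impact small (REINDEX takes locks that block writes to the
-- table while it runs).
-- REINDEX INDEX datatable_20090404_customeriddatapointdate_idx;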
[ { "msg_contents": "I have a query [1] that Postgres is insisting on using a Nested Loop\nfor some reason when a Hash Join is much faster. It seems like the\nestimates are way off. I've set default_statistics_target to 250, 500,\n1000 and analyzed, but they never seem to improve. If I disable\nnestloops, the query completes in around 3-5s. With them enabled, it\ntakes anywhere from 45 to 60 seconds. Here is the DDL for the tables\nand the month_last_day function [4].\n\nAny help would be appreciated!\n\nDavid Blewett\n\n1. http://dpaste.com/hold/41842/\n2. http://explain.depesz.com/s/Wg\n3. http://explain.depesz.com/s/1s\n4. http://dpaste.com/hold/41846/\n", "msg_date": "Thu, 7 May 2009 12:53:06 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "On Thu, May 7, 2009 at 12:53 PM, David Blewett <[email protected]> wrote:\n> 1. http://dpaste.com/hold/41842/\n> 2. http://explain.depesz.com/s/Wg\n> 3. http://explain.depesz.com/s/1s\n> 4. http://dpaste.com/hold/41846/\n\nForgot to mention that I'm using Postgres 8.3.6 on linux 2.6.24.\nShared buffers are set to 1GB, effective_cache_size is set to 3GB.\nServer has 6GB RAM, running on a SCSI 4-disk RAID10.\n\nDavid Blewett\n", "msg_date": "Thu, 7 May 2009 13:28:14 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "David Blewett <[email protected]> writes:\n> On Thu, May 7, 2009 at 12:53 PM, David Blewett <[email protected]> wrote:\n>> 1. http://dpaste.com/hold/41842/\n>> 2. http://explain.depesz.com/s/Wg\n>> 3. http://explain.depesz.com/s/1s\n>> 4. http://dpaste.com/hold/41846/\n\n> Forgot to mention that I'm using Postgres 8.3.6 on linux 2.6.24.\n\nWell, the reason it likes the nestloop plan is the estimate of just one\nrow out of the lower joins --- that case is pretty much always going to\nfavor a nestloop over other kinds of joins. If it were estimating even\nas few as ten rows out, it'd likely switch to a different plan. So the\nquestion to ask is why the rowcount estimates are so abysmally bad.\nYou mentioned having tried to increase the stats targets, but without\nseeing the actual stats data it's hard to speculate about this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 May 2009 16:31:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query " }, { "msg_contents": "On Thu, May 7, 2009 at 4:31 PM, Tom Lane <[email protected]> wrote:\n> as few as ten rows out, it'd likely switch to a different plan.  So the\n> So the question to ask is why the rowcount estimates are so abysmally bad.\n> You mentioned having tried to increase the stats targets, but without\n> seeing the actual stats data it's hard to speculate about this.\n\nHow do I get that data for you?\n\nDavid\n", "msg_date": "Thu, 7 May 2009 18:41:16 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "David Blewett <[email protected]> writes:\n> On Thu, May 7, 2009 at 4:31 PM, Tom Lane <[email protected]> wrote:\n>> as few as ten rows out, it'd likely switch to a different plan. 
\n>> So the question to ask is why the rowcount estimates are so abysmally bad.\n>> You mentioned having tried to increase the stats targets, but without\n>> seeing the actual stats data it's hard to speculate about this.\n\n> How do I get that data for you?\n\nLook into pg_stats for the rows concerning the columns used in the\nquery's WHERE and JOIN/ON clauses.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 May 2009 18:44:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query " }, { "msg_contents": "On Thu, May 7, 2009 at 6:44 PM, Tom Lane <[email protected]> wrote:\n> Look into pg_stats for the rows concerning the columns used in the\n> query's WHERE and JOIN/ON clauses.\n\nOkay, here you go:\nhttp://rafb.net/p/20y8Oh72.html\n\nDavid\n", "msg_date": "Thu, 7 May 2009 18:56:35 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "David Blewett <[email protected]> writes:\n> On Thu, May 7, 2009 at 6:44 PM, Tom Lane <[email protected]> wrote:\n>> Look into pg_stats for the rows concerning the columns used in the\n>> query's WHERE and JOIN/ON clauses.\n\n> Okay, here you go:\n> http://rafb.net/p/20y8Oh72.html\n\nI got some time to poke into this, but didn't get very far --- the\njoins that seem to be the main problem involve\ncanvas_textresponse.submission_id which you didn't include stats for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 May 2009 20:08:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query " }, { "msg_contents": "David Blewett <[email protected]> writes:\n> Apparently there was a typo in the query that I didn't notice that\n> excluded that table's columns. Here is the new output including it:\n> http://pastesite.com/7017\n\nThanks. Could I trouble you for one other data point --- about how many\nrows are in each of these tables?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 May 2009 22:00:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query " }, { "msg_contents": "On Fri, May 8, 2009 at 10:00 PM, Tom Lane <[email protected]> wrote:\n> Thanks. Could I trouble you for one other data point --- about how many\n> rows are in each of these tables?\n\nNot a problem:\ncanvas_dateresponse 263819\ncanvas_foreignkeyresponse 646484\ncanvas_integerresponse 875375\ncanvas_submission 135949\ncanvas_textresponse 142698\n\nDavid\n", "msg_date": "Sat, 9 May 2009 11:15:17 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "David Blewett <[email protected]> writes:\n> On Fri, May 8, 2009 at 10:00 PM, Tom Lane <[email protected]> wrote:\n>> Thanks. 
Could I trouble you for one other data point --- about how many\n>> rows are in each of these tables?\n\n> Not a problem:\n\nAs best I can tell, the selectivity numbers are about what they should\nbe --- for instance, using these stats I get a selectivity of 0.0000074\nfor the join clause fkr.submission_id = tr.submission_id. Over the\nentire relations (646484 and 142698 rows) that's predicting a join size\nof 683551, which seems to be in the right ballpark (it looks like\nactually it's one join row per canvas_foreignkeyresponse row, correct?).\nThe thing that is strange here is that the one-to-one ratio holds up\ndespite strong and apparently uncorrelated restrictions on the\nrelations:\n\n -> Hash Join (cost=1485.69..3109.78 rows=28 width=24) (actual time=5.576..22.737 rows=4035 loops=1)\n Hash Cond: (fkr.submission_id = tr.submission_id)\n -> Bitmap Heap Scan on canvas_foreignkeyresponse fkr (cost=14.52..1628.19 rows=580 width=4) (actual time=0.751..4.497 rows=4035 loops=1)\n Recheck Cond: ((question_id = ANY ('{79,1037}'::integer[])) AND (object_id < 3))\n -> Bitmap Index Scan on canvas_foreignkeyresponse_qv2_idx (cost=0.00..14.38 rows=580 width=0) (actual time=0.671..0.671 rows=4035 loops=1)\n Index Cond: ((question_id = ANY ('{79,1037}'::integer[])) AND (object_id < 3))\n -> Hash (cost=1388.48..1388.48 rows=6615 width=20) (actual time=4.805..4.805 rows=6694 loops=1)\n -> Bitmap Heap Scan on canvas_textresponse tr (cost=131.79..1388.48 rows=6615 width=20) (actual time=0.954..2.938 rows=6694 loops=1)\n Recheck Cond: (question_id = ANY ('{4,1044}'::integer[]))\n -> Bitmap Index Scan on canvas_textresponse_question_id (cost=0.00..130.14 rows=6615 width=0) (actual time=0.920..0.920 rows=6694 loops=1)\n Index Cond: (question_id = ANY ('{4,1044}'::integer[]))\n\nHow is it that each fkr row matching those question_ids has a join match\nin tr that has those other two question_ids? It seems like there must\nbe a whole lot of hidden correlation here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 09 May 2009 11:52:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query " }, { "msg_contents": "On Sat, May 9, 2009 at 11:52 AM, Tom Lane <[email protected]> wrote:\n\n> As best I can tell, the selectivity numbers are about what they should\n> be --- for instance, using these stats I get a selectivity of 0.0000074\n> for the join clause fkr.submission_id = tr.submission_id. Over the\n> entire relations (646484 and 142698 rows) that's predicting a join size\n> of 683551, which seems to be in the right ballpark (it looks like\n> actually it's one join row per canvas_foreignkeyresponse row, correct?).\n\n\nThe design is that for each submission_id there are any number of responses\nof different types. This particular questionnaire has 78 questions, 2 of\nwhich are text responses and 28 are foreignkey responses. The restrictions\non the question_id limit the rows returned from those tables to 1 each in\nthis case however. So yes, it's one to one in this case.\n\n\n> How is it that each fkr row matching those question_ids has a join match\n> in tr that has those other two question_ids? It seems like there must\n> be a whole lot of hidden correlation here.\n\n\nAs I mentioned before, they are all linked by the submission_id which\nindicates they are part of a single submission against a particular\nquestionnaire (chart_id in the ddl). 
It is a design that I based on Elein\nMustain's * <http://www.varlena.com/>*Question/Answer problem [1]. This\nparticular query includes 2 chart_id's because they contain virtually the\nsame data (sets of questions), but have different validation requirements.\nDoes that shed any more light?\n\nThanks again for the help.\n\nDavid\n\n1. http://www.varlena.com/GeneralBits/110.php\n\nOn Sat, May 9, 2009 at 11:52 AM, Tom Lane <[email protected]> wrote:\n\nAs best I can tell, the selectivity numbers are about what they should\nbe --- for instance, using these stats I get a selectivity of 0.0000074\nfor the join clause fkr.submission_id = tr.submission_id.  Over the\nentire relations (646484 and 142698 rows) that's predicting a join size\nof 683551, which seems to be in the right ballpark (it looks like\nactually it's one join row per canvas_foreignkeyresponse row, correct?).The design is that for each submission_id there are any number of responses of different types. This particular questionnaire has 78 questions, 2 of which are text responses and 28 are foreignkey responses. The restrictions on the question_id limit the rows returned from those tables to 1 each in this case however. So yes, it's one to one in this case. \n \nHow is it that each fkr row matching those question_ids has a join match\nin tr that has those other two question_ids?  It seems like there must\nbe a whole lot of hidden correlation here.As I mentioned before, they are all linked by the submission_id which indicates they are part of a single submission against a particular questionnaire (chart_id in the ddl). It is a design that I based on Elein Mustain's Question/Answer problem [1]. This particular query includes 2 chart_id's because they contain virtually the same data (sets of questions), but have different validation requirements. Does that shed any more light?\nThanks again for the help.David1. http://www.varlena.com/GeneralBits/110.php", "msg_date": "Sun, 10 May 2009 16:36:50 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "On Sat, May 9, 2009 at 11:52 AM, Tom Lane <[email protected]> wrote:\n\n> David Blewett <[email protected]> writes:\n> > On Fri, May 8, 2009 at 10:00 PM, Tom Lane <[email protected]> wrote:\n> >> Thanks. Could I trouble you for one other data point --- about how many\n> >> rows are in each of these tables?\n>\n> > Not a problem:\n>\n> As best I can tell, the selectivity numbers are about what they should\n> be --- for instance, using these stats I get a selectivity of 0.0000074\n> for the join clause fkr.submission_id = tr.submission_id. Over the\n> entire relations (646484 and 142698 rows) that's predicting a join size\n> of 683551, which seems to be in the right ballpark (it looks like\n> actually it's one join row per canvas_foreignkeyresponse row, correct?).\n\n\nI took the time to load this data into an 8.4beta2 install, and the same\nquery runs in a much more reasonable timeframe (~3s as opposed to ~50s). I\nset the statistics target to 500, and got this explain [1].\n\nDavid\n\n1. http://explain.depesz.com/s/pw\n\nOn Sat, May 9, 2009 at 11:52 AM, Tom Lane <[email protected]> wrote:\nDavid Blewett <[email protected]> writes:\n> On Fri, May 8, 2009 at 10:00 PM, Tom Lane <[email protected]> wrote:\n>> Thanks.  
Could I trouble you for one other data point --- about how many\n>> rows are in each of these tables?\n\n> Not a problem:\n\nAs best I can tell, the selectivity numbers are about what they should\nbe --- for instance, using these stats I get a selectivity of 0.0000074\nfor the join clause fkr.submission_id = tr.submission_id.  Over the\nentire relations (646484 and 142698 rows) that's predicting a join size\nof 683551, which seems to be in the right ballpark (it looks like\nactually it's one join row per canvas_foreignkeyresponse row, correct?).I took the time to load this data into an 8.4beta2 install, and the same query runs in a much more reasonable timeframe (~3s as opposed to ~50s). I set the statistics target to 500, and got this explain [1].\nDavid1. http://explain.depesz.com/s/pw", "msg_date": "Fri, 22 May 2009 16:14:45 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "David Blewett <[email protected]> writes:\n> I took the time to load this data into an 8.4beta2 install, and the same\n> query runs in a much more reasonable timeframe (~3s as opposed to ~50s). I\n> set the statistics target to 500, and got this explain [1].\n> 1. http://explain.depesz.com/s/pw\n\nHmm... the join size estimates are no better than before, so I'm afraid\nthat 8.4 is just as vulnerable to picking a bad plan as the previous\nversions were. I don't think you should assume anything's been fixed.\n\nIt still feels like this schema design is obscuring correlations that\nthe planner needs to know about in order to make decent estimates.\nYou mentioned earlier that the seemingly unrelated question_ids were\nlinked via a common submission_id. I wonder whether it's possible to\nquery using the submission_id instead?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 May 2009 14:42:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query " }, { "msg_contents": "On Sun, May 24, 2009 at 2:42 PM, Tom Lane <[email protected]> wrote:\n>\n> It still feels like this schema design is obscuring correlations that\n> the planner needs to know about in order to make decent estimates.\n\n\nI'm not sure how to make the planner aware of these correlations. Is there\nsomething inherently flawed with this design? It seems pretty close to the\none on the Varlena website [1].\n\nYou mentioned earlier that the seemingly unrelated question_ids were\n> linked via a common submission_id. I wonder whether it's possible to\n> query using the submission_id instead?\n>\n\nWell, I do join the different response tables [text/date/etc] together via\nthe submission_id. However, in order to be able to apply the where clauses\nappropriately, I have to limit the responses to the appropriate\nquestion_id's. Would it matter to push that requirement down to the where\nclause instead of part of the join clause?\n\nDavid\n\n1. http://www.varlena.com/GeneralBits/110.php\n\nOn Sun, May 24, 2009 at 2:42 PM, Tom Lane <[email protected]> wrote:\n\nIt still feels like this schema design is obscuring correlations that\nthe planner needs to know about in order to make decent estimates.I'm not sure how to make the planner aware of these correlations. Is there something inherently flawed with this design? It seems pretty close to the one on the Varlena website [1]. \n\nYou mentioned earlier that the seemingly unrelated question_ids were\nlinked via a common submission_id.  
I wonder whether it's possible to\nquery using the submission_id instead?Well, I do join the different response tables [text/date/etc] together via the submission_id. However, in order to be able to apply the where clauses appropriately, I have to limit the responses to the appropriate question_id's. Would it matter to push that requirement down to the where clause instead of part of the join clause?\nDavid1. http://www.varlena.com/GeneralBits/110.php", "msg_date": "Mon, 25 May 2009 11:22:38 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "On Mon, May 25, 2009 at 11:22 AM, David Blewett <[email protected]> wrote:\n> On Sun, May 24, 2009 at 2:42 PM, Tom Lane <[email protected]> wrote:\n>>\n>> It still feels like this schema design is obscuring correlations that\n>> the planner needs to know about in order to make decent estimates.\n>\n> I'm not sure how to make the planner aware of these correlations. Is there\n> something inherently flawed with this design? It seems pretty close to the\n> one on the Varlena website [1].\n>\n>> You mentioned earlier that the seemingly unrelated question_ids were\n>> linked via a common submission_id.  I wonder whether it's possible to\n>> query using the submission_id instead?\n>\n> Well, I do join the different response tables [text/date/etc] together via\n> the submission_id. However, in order to be able to apply the where clauses\n> appropriately, I have to limit the responses to the appropriate\n> question_id's. Would it matter to push that requirement down to the where\n> clause instead of part of the join clause?\n>\n> David\n>\n> 1. http://www.varlena.com/GeneralBits/110.php\n>\n\nAnyone have thoughts on this?\n\nBueller?\n\nDavid\n", "msg_date": "Fri, 5 Jun 2009 17:53:26 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "David,\n\nMy first thought would be to increase statistics dramatically on the \nfiltered columns in hopes of making PG realize there's a lot of rows \nthere; it's off by 8x. Correlations stats are an ongoing issue in \nPostgreSQL.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Fri, 05 Jun 2009 16:32:44 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "On Fri, Jun 5, 2009 at 7:32 PM, Josh Berkus <[email protected]> wrote:\n> My first thought would be to increase statistics dramatically on the\n> filtered columns in hopes of making PG realize there's a lot of rows there;\n> it's off by 8x.  Correlations stats are an ongoing issue in PostgreSQL.\n\nI started at a stats_target of 250, then tried 500 and finally the\nplan that I pasted before resorting to disabling nestloops was at 1000\n(and re-analyzing in between of course). 
Will a CLUSTER or REINDEX\nhelp at all?\n\nDavid\n", "msg_date": "Fri, 5 Jun 2009 20:29:45 -0400", "msg_from": "David Blewett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" }, { "msg_contents": "On Fri, Jun 5, 2009 at 8:29 PM, David Blewett<[email protected]> wrote:\n> On Fri, Jun 5, 2009 at 7:32 PM, Josh Berkus <[email protected]> wrote:\n>> My first thought would be to increase statistics dramatically on the\n>> filtered columns in hopes of making PG realize there's a lot of rows there;\n>> it's off by 8x.  Correlations stats are an ongoing issue in PostgreSQL.\n>\n> I started at a stats_target of 250, then tried 500 and finally the\n> plan that I pasted before resorting to disabling nestloops was at 1000\n> (and re-analyzing in between of course). Will a CLUSTER or REINDEX\n> help at all?\n\nProbably not. Your problem is similar to the one Anne Rosset was\ncomplaining about on -performance a couple of days ago, though your\ncase is appears to be more complex.\n\nhttp://archives.postgresql.org/pgsql-performance/2009-06/msg00023.php\n\nIt's really not clear what to do about this problem. In Anne's case,\nit would probably be enough to gather MCVs over the product space of\nher folder_id and is_deleted columns, but I'm not certain that would\nhelp you. It almost seems like we need a way to say \"for every\ndistinct value that appears in column X, you need to gather separate\nstatistics for the other columns of the table\". But that could make\nstatistics gathering and query planning very expensive.\n\nAnother angle of attack, which we've talked about before, is to teach\nthe executor that when a nestloop with a hash-joinable condition\nexecutes too many times, it should hash the inner side on the next\npass and then switch to a hash join.\n\nBut none of this helps you very much right now...\n\n...Robert\n", "msg_date": "Fri, 5 Jun 2009 22:02:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Plan for Questionnaire-Type Query" } ]
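Until the planner can track cross-column correlation, the practical work-arounds for this kind of questionnaire (EAV) schema are per-column statistics targets on the misestimated columns and, following Tom's suggestion, driving the query from the submission table so that each per-question filter is estimated on its own. The sketch below is illustrative only: it borrows the table, column and question_id values quoted above, assumes canvas_submission.id is the key the response tables reference, and does not claim to reproduce the original query's full semantics.

-- Per-column statistics targets (instead of a global
-- default_statistics_target) on the columns the planner misestimates.
ALTER TABLE canvas_foreignkeyresponse ALTER COLUMN question_id SET STATISTICS 1000;
ALTER TABLE canvas_foreignkeyresponse ALTER COLUMN submission_id SET STATISTICS 1000;
ALTER TABLE canvas_textresponse ALTER COLUMN question_id SET STATISTICS 1000;
ALTER TABLE canvas_textresponse ALTER COLUMN submission_id SET STATISTICS 1000;
ANALYZE canvas_foreignkeyresponse;
ANALYZE canvas_textresponse;

-- Driving the query from the submission table, with one EXISTS per
-- question, keeps each filter's selectivity independent and avoids the
-- tiny combined-join estimate that pushed the planner into a nested loop.
SELECT s.id
FROM canvas_submission s
WHERE EXISTS (SELECT 1
              FROM canvas_foreignkeyresponse fkr
              WHERE fkr.submission_id = s.id
                AND fkr.question_id IN (79, 1037)
                AND fkr.object_id < 3)
  AND EXISTS (SELECT 1
              FROM canvas_textresponse tr
              WHERE tr.submission_id = s.id
                AND tr.question_id IN (4, 1044));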
[ { "msg_contents": "Hi everybody,\n\nI'm wondering why a DELETE statement of mine does not make use of \ndefined indexes on the tables.\n\nI have the following tables which are linked as such: component -> \nrank -> node -> corpus;\n\nNow I want to delete all entries in component by giving a list of \ncorpus ids.\n\nThe query is as such:\n\nDELETE FROM component\nUSING corpus toplevel, corpus child, node, rank\nWHERE toplevel.id IN (25) AND toplevel.top_level = 'y'\nAND toplevel.pre <= child.pre AND toplevel.post >= child.pre\nAND node.corpus_ref = child.id AND rank.node_ref = node.id AND \nrank.component_ref = component.id;\n\nThe table corpus is defined as such:\n\n Table \"public.corpus\"\n Column | Type | Modifiers\n-----------+------------------------+-----------\n id | numeric(38,0) | not null\n name | character varying(100) | not null\n type | character varying(100) | not null\n version | character varying(100) |\n pre | numeric(38,0) | not null\n post | numeric(38,0) | not null\n top_level | boolean | not null\nIndexes:\n \"corpus_pkey\" PRIMARY KEY, btree (id)\n \"corpus_post_key\" UNIQUE, btree (post)\n \"corpus_pre_key\" UNIQUE, btree (pre)\n \"idx_corpus__id_pre_post\" btree (id, pre, post)\n \"idx_corpus__pre_post\" btree (pre, post)\n \"idx_corpus__toplevel\" btree (id) WHERE top_level = true\n\n\nThe query plan of the above statement looks like this:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Hash Join (cost=708.81..4141.14 rows=9614 width=6)\n Hash Cond: (rank.component_ref = component.id)\n -> Nested Loop (cost=3.20..3268.07 rows=8373 width=8)\n -> Hash Join (cost=3.20..1306.99 rows=4680 width=8)\n Hash Cond: (node.corpus_ref = child.id)\n -> Seq Scan on node (cost=0.00..1075.63 rows=48363 \nwidth=14)\n -> Hash (cost=3.16..3.16 rows=3 width=27)\n -> Nested Loop (cost=0.00..3.16 rows=3 width=27)\n Join Filter: ((toplevel.pre <= child.pre) \nAND (toplevel.post >= child.pre))\n -> Seq Scan on corpus toplevel \n(cost=0.00..1.39 rows=1 width=54)\n Filter: (top_level AND (id = \n25::numeric))\n -> Seq Scan on corpus child \n(cost=0.00..1.31 rows=31 width=54)\n -> Index Scan using fk_rank_2_struct on rank \n(cost=0.00..0.39 rows=2 width=16)\n Index Cond: (rank.node_ref = node.id)\n -> Hash (cost=390.27..390.27 rows=25227 width=14)\n -> Seq Scan on component (cost=0.00..390.27 rows=25227 \nwidth=14)\n(16 rows)\n\nSpecifically, I'm wondering why the innermost scan on corpus \n(toplevel) does not use the index idx_corpus__toplevel and why the \njoin between corpus (toplevel) and corpus (child) is not a merge join \nusing the index corpus_pre_key to access the child table.\n\nFYI, corpus.pre and corpus.post encode a corpus tree (or rather a \nforest) using a combined pre and post order. This scheme guarantees \nthat parent.post > child.post > child.pre for all edges parent -> \nchild in the corpus tree. 
I'm using the same scheme elsewhere in \nSELECT statements and they work fine there.\n\nThanks,\nViktor\n", "msg_date": "Fri, 8 May 2009 00:38:58 +0200", "msg_from": "Viktor Rosenfeld <[email protected]>", "msg_from_op": true, "msg_subject": "Indexes not used in DELETE" }, { "msg_contents": "Viktor Rosenfeld <[email protected]> writes:\n> -> Seq Scan on corpus toplevel (cost=0.00..1.39 rows=1 width=54)\n> Filter: (top_level AND (id = 25::numeric))\n\n> Specifically, I'm wondering why the innermost scan on corpus \n> (toplevel) does not use the index idx_corpus__toplevel\n\nThe cost estimate indicates that there are so few rows in corpus\nthat an indexscan would be a waste of time.\n\n> and why the \n> join between corpus (toplevel) and corpus (child) is not a merge join \n> using the index corpus_pre_key to access the child table.\n\nSame answer. Populate the table and the plan will change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 May 2009 19:06:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes not used in DELETE " }, { "msg_contents": "Hi Tom,\n\nI should have looked at the analyzed plan first. The culprit for the \nslow query were trigger function calls on foreign keys.\n\nCiao,\nViktor\n\nAm 08.05.2009 um 01:06 schrieb Tom Lane:\n\n> Viktor Rosenfeld <[email protected]> writes:\n>> -> Seq Scan on corpus toplevel \n>> (cost=0.00..1.39 rows=1 width=54)\n>> Filter: (top_level AND (id = \n>> 25::numeric))\n>\n>> Specifically, I'm wondering why the innermost scan on corpus\n>> (toplevel) does not use the index idx_corpus__toplevel\n>\n> The cost estimate indicates that there are so few rows in corpus\n> that an indexscan would be a waste of time.\n>\n>> and why the\n>> join between corpus (toplevel) and corpus (child) is not a merge join\n>> using the index corpus_pre_key to access the child table.\n>\n> Same answer. Populate the table and the plan will change.\n>\n> \t\t\tregards, tom lane\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 8 May 2009 12:17:53 +0200", "msg_from": "Viktor Rosenfeld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes not used in DELETE " } ]
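For reference, a hedged sketch of the usual follow-up once EXPLAIN ANALYZE shows the time going into "Trigger for constraint ..." lines rather than the plan itself: index the referencing column so each foreign-key check can use an index scan instead of scanning the referencing table. Whether rank.component_ref is already indexed is not stated in the thread, and the index name below is made up.

-- Speeds up the FK checks fired against rank when rows are deleted from component.
CREATE INDEX idx_rank__component_ref ON rank (component_ref);
ANALYZE rank;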
[ { "msg_contents": "\nI'm running a rather complex query and noticed a peculiarity in the usage \nof statistics that seriously affects the plan generated. I can extract the \nrelevant bit:\n\nmodmine-r9=# select * from pg_stats where tablename = 'geneflankingregion' AND attname IN ('distance', 'direction');\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation\n------------+--------------------+-----------+-----------+-----------+------------+----------------------------------+------------------------------------------------+------------------+-------------\n public | geneflankingregion | distance | 0 | 6 | 5 | {5.0kb,0.5kb,1.0kb,2.0kb,10.0kb} | {0.201051,0.200798,0.200479,0.199088,0.198583} | | 0.197736\n public | geneflankingregion | direction | 0 | 10 | 2 | {downstream,upstream} | {0.500719,0.499281} | | 0.495437\n(2 rows)\n\nmodmine-r9=# SELECT COUNT(*) FROM geneflankingregion;\n count\n--------\n 455020\n(1 row)\n\nmodmine-r9=# explain analyse SELECT * FROM geneflankingregion WHERE distance = '10.0kb' AND direction = 'upstream';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on geneflankingregion (cost=0.00..15507.30 rows=45115 width=213) (actual time=0.053..181.764 rows=45502 loops=1)\n Filter: ((distance = '10.0kb'::text) AND (direction = 'upstream'::text))\n Total runtime: 227.245 ms\n(3 rows)\n\nmodmine-r9=# explain analyse SELECT * FROM geneflankingregion WHERE LOWER(distance) = '10.0kb' AND LOWER(direction) = 'upstream';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on geneflankingregion\n (cost=66.95..88.77 rows=11 width=213)\n (actual time=207.555..357.359 rows=45502 loops=1)\n Recheck Cond: ((lower(distance) = '10.0kb'::text) AND (lower(direction) = 'upstream'::text))\n -> BitmapAnd\n (cost=66.95..66.95 rows=11 width=0)\n (actual time=205.978..205.978 rows=0 loops=1)\n -> Bitmap Index Scan on geneflankingregion__distance_equals\n (cost=0.00..31.34 rows=2275 width=0)\n (actual time=79.380..79.380 rows=91004 loops=1)\n Index Cond: (lower(distance) = '10.0kb'::text)\n -> Bitmap Index Scan on geneflankingregion__direction_equals\n (cost=0.00..35.35 rows=2275 width=0)\n (actual time=124.639..124.639 rows=227510 loops=1)\n Index Cond: (lower(direction) = 'upstream'::text)\n Total runtime: 401.740 ms\n(8 rows)\n\nWhen I wrap the fields in the constraints in a LOWER() function, the \nplanner stops looking at the statistics and makes a wild guess, even \nthough it is very obvious from just looking what the result should be. \nEmbedded in a much larger query, the inaccuracy in the number of rows (11 \ninstead of 45502) causes major planning problems. Also, why does the \nBitmapAnd say zero actual rows?\n\nI understand this probably isn't Priority No. 1, and there are some \ninteresting corner cases when n_distinct is higher than the histogram \nwidth, but would it be possible to fix this one up?\n\nMatthew\n\n-- \n I would like to think that in this day and age people would know better than\n to open executables in an e-mail. I'd also like to be able to flap my arms\n and fly to the moon. 
-- Tim Mullen\n", "msg_date": "Fri, 8 May 2009 14:46:22 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Statistics use with functions" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> When I wrap the fields in the constraints in a LOWER() function, the \n> planner stops looking at the statistics and makes a wild guess, even \n> though it is very obvious from just looking what the result should be. \n\nWell, in general the planner can't assume anything about the statistics\nof a function result, since it doesn't know how the function behaves.\nIn this case, however, you evidently have an index on lower(distance)\nwhich should have caused ANALYZE to gather stats on the values of that\nfunctional expression. It looks like there might be something wrong\nthere --- can you look into pg_stats and see if there is such an entry\nand if it looks sane?\n\n> Also, why does the BitmapAnd say zero actual rows?\n\nThere isn't any reasonably-inexpensive way for EXPLAIN ANALYZE to\ndetermine how many rows are represented by a bitmap result, so it\ndoesn't try.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 May 2009 11:48:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Statistics use with functions " }, { "msg_contents": "On Fri, 8 May 2009, Tom Lane wrote:\n> In this case, however, you evidently have an index on lower(distance)\n> which should have caused ANALYZE to gather stats on the values of that\n> functional expression. It looks like there might be something wrong\n> there --- can you look into pg_stats and see if there is such an entry\n> and if it looks sane?\n\nWhat should I be looking for? I don't see anything obvious from this:\n\nmodmine-r9=# select attname from pg_stats where tablename = 'geneflankingregion';\n\nAh, now I see it - I re-analysed, and found entries in pg_stats where \ntablename is the name of the index. Now the query plans correctly and has \nthe right estimates. So, one needs to analyse AFTER creating indexes - \ndidn't know that.\n\nmodmine-r9=# explain analyse SELECT * FROM geneflankingregion WHERE \nLOWER(distance) = '10.0kb' AND LOWER(direction) = 'upstream';\n QUERY PLAN\n-----------------------------------------------------------------\n Bitmap Heap Scan on geneflankingregion\n (cost=1197.19..11701.87 rows=45614 width=212)\n (actual time=18.336..153.825 rows=45502 loops=1)\n Recheck Cond: (lower(distance) = '10.0kb'::text)\n Filter: (lower(direction) = 'upstream'::text)\n -> Bitmap Index Scan on geneflankingregion__distance_equals\n (cost=0.00..1185.78 rows=91134 width=0)\n (actual time=16.565..16.565 rows=91004 loops=1)\n Index Cond: (lower(distance) = '10.0kb'::text)\n Total runtime: 199.282 ms\n(6 rows)\n\nMatthew\n\n-- \n It is better to keep your mouth closed and let people think you are a fool\n than to open it and remove all doubt. -- Mark Twain\n", "msg_date": "Fri, 8 May 2009 17:11:50 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Statistics use with functions " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Ah, now I see it - I re-analysed, and found entries in pg_stats where \n> tablename is the name of the index. Now the query plans correctly and has \n> the right estimates. So, one needs to analyse AFTER creating indexes - \n> didn't know that.\n\nYes, for functional indexes it's helpful to do that. 
Doesn't matter\nfor plain-old-plain-old indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 May 2009 12:16:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Statistics use with functions " } ]
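To recap the fix that worked above in runnable form: an expression index only gets its own statistics once the table is analyzed after the index exists, and those statistics then appear in pg_stats under the index's name rather than the table's. The exact index definition is an assumption here; only its name appears in the thread.

CREATE INDEX geneflankingregion__distance_equals
    ON geneflankingregion (lower(distance));
ANALYZE geneflankingregion;

-- The expression's statistics are listed under the index name:
SELECT tablename, attname, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'geneflankingregion__distance_equals';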
[ { "msg_contents": "Hi all,\nrecently I came across a question from a customer of mine, asking me if \nit would feasible to run PostgreSQL along with PostGIS on embedded hardware.\nThey didn't give me complete information, but it should be some kind of \nindustrial PC with a 600MHz CPU. Memory should be not huge nor small, \nmaybe a couple of GBytes, hard disk should be some type of industrial \nCompact Flash of maybe 16 GBytes.\n\nThey are thinking about using this setup on-board of public buses and \ntrams, along with a GPS receiver, for self-localization. So that when \nthe bus or tram enters defined zones or passes near defined points, \nevents are triggered.\nThe database could probably be used completely read-only or almost that.\n\nWhat performances do you think would be possible for PostgreSQL+PostGIS \non such hardware???\n\nBye\nPaolo\n", "msg_date": "Fri, 08 May 2009 18:06:40 +0200", "msg_from": "Paolo Rizzi <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "On Fri, 2009-05-08 at 18:06 +0200, Paolo Rizzi wrote:\n> Hi all,\n> recently I came across a question from a customer of mine, asking me if \n> it would feasible to run PostgreSQL along with PostGIS on embedded hardware.\n> They didn't give me complete information, but it should be some kind of \n> industrial PC with a 600MHz CPU. Memory should be not huge nor small, \n> maybe a couple of GBytes, hard disk should be some type of industrial \n> Compact Flash of maybe 16 GBytes.\n> \n\nWell the CPU is slow the but rest isn't so bad.\n\n> They are thinking about using this setup on-board of public buses and \n> trams, along with a GPS receiver, for self-localization. So that when \n> the bus or tram enters defined zones or passes near defined points, \n> events are triggered.\n> The database could probably be used completely read-only or almost that.\n> \n> What performances do you think would be possible for PostgreSQL+PostGIS \n> on such hardware???\n> \n\nIf you aren't doing a lot of writing I don't see a huge barrier to this.\n\nSincerely,\n\nJoshua D. Drkae\n\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Fri, 08 May 2009 09:56:34 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "Joshua D. Drake ha scritto:\n> On Fri, 2009-05-08 at 18:06 +0200, Paolo Rizzi wrote:\n>> Hi all,\n>> recently I came across a question from a customer of mine, asking me if \n>> it would feasible to run PostgreSQL along with PostGIS on embedded hardware.\n>> They didn't give me complete information, but it should be some kind of \n>> industrial PC with a 600MHz CPU. Memory should be not huge nor small, \n>> maybe a couple of GBytes, hard disk should be some type of industrial \n>> Compact Flash of maybe 16 GBytes.\n>>\n> \n> Well the CPU is slow the but rest isn't so bad.\n> \n>> They are thinking about using this setup on-board of public buses and \n>> trams, along with a GPS receiver, for self-localization. 
So that when \n>> the bus or tram enters defined zones or passes near defined points, \n>> events are triggered.\n>> The database could probably be used completely read-only or almost that.\n>>\n>> What performances do you think would be possible for PostgreSQL+PostGIS \n>> on such hardware???\n>>\n> \n> If you aren't doing a lot of writing I don't see a huge barrier to this.\n> \n> Sincerely,\n> \n> Joshua D. Drkae\nThank you!!!\nIndeed I also think it could be done, but I searched the Web and found \nno previous experience of the like, so maybe it's just too weird putting \na spatial-enabled RDBMS on-board buses...!?!\n\nAnyway I found the TurnKey PostgreSQL appliance. It's a small \nUbuntu-based live-cd with PostgreSQL and PostGIS preconfigured.\nI could suggest these people to try it out on hardware similar to what \nthey intend to use, to have a feel of how it behaves.\n\nBye\nPaolo\n\n\n\n", "msg_date": "Fri, 08 May 2009 19:50:08 +0200", "msg_from": "Paolo Rizzi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: Paolo Rizzi\n> \n> Hi all,\n> recently I came across a question from a customer of mine, \n> asking me if it would feasible to run PostgreSQL along with \n> PostGIS on embedded hardware.\n> They didn't give me complete information, but it should be \n> some kind of industrial PC with a 600MHz CPU. Memory should \n> be not huge nor small, maybe a couple of GBytes, hard disk \n> should be some type of industrial Compact Flash of maybe 16 GBytes.\n> \n> They are thinking about using this setup on-board of public \n> buses and trams, along with a GPS receiver, for \n> self-localization. So that when the bus or tram enters \n> defined zones or passes near defined points, events are triggered.\n> The database could probably be used completely read-only or \n> almost that.\n> \n\nHi Paolo,\n\nI'm not really responding to your question. It happens that I collaborated\non a postgres/postgis based solution for public transportation and the\nmotive why you are trying to put the database in the embedded hardware is\npuzzling to me. In this solution we used a centralized PG database, the\ndevices in buses captured geographical position and other business related\ndata and fetched it by cellular network to the central server. \nCalculations on position where made on the server and related events where\nfetched back accordingly.\n\nIf possible, I would like to know what drives you to put a database on each\ndevice? You dont have a wireless link on each unit?\n\n\n> What performances do you think would be possible for \n> PostgreSQL+PostGIS on such hardware???\n\nWe never considered that solution so I couldn´t say.\n\n> \n> Bye\n> Paolo\n> \n\nRegards,\nFernando.\n\n", "msg_date": "Fri, 8 May 2009 16:04:33 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "> \n> \n>> -----Mensaje original-----\n>> De: Paolo Rizzi\n>>\n>> Hi all,\n>> recently I came across a question from a customer of mine, \n>> asking me if it would feasible to run PostgreSQL along with \n>> PostGIS on embedded hardware.\n>> They didn't give me complete information, but it should be \n>> some kind of industrial PC with a 600MHz CPU. 
Memory should \n>> be not huge nor small, maybe a couple of GBytes, hard disk \n>> should be some type of industrial Compact Flash of maybe 16 GBytes.\n>>\n>> They are thinking about using this setup on-board of public \n>> buses and trams, along with a GPS receiver, for \n>> self-localization. So that when the bus or tram enters \n>> defined zones or passes near defined points, events are triggered.\n>> The database could probably be used completely read-only or \n>> almost that.\n>>\n> \n> Hi Paolo,\n> \n> I'm not really responding to your question. It happens that I collaborated\n> on a postgres/postgis based solution for public transportation and the\n> motive why you are trying to put the database in the embedded hardware is\n> puzzling to me. In this solution we used a centralized PG database, the\n> devices in buses captured geographical position and other business related\n> data and fetched it by cellular network to the central server. \n> Calculations on position where made on the server and related events where\n> fetched back accordingly.\n> \n> If possible, I would like to know what drives you to put a database on each\n> device? You dont have a wireless link on each unit?\nIndeed I was as puzzled as you when they described me their idea, but I \nthink it makes sense. The buses and trams have to be independent of the \nradio link because there are certain operations that have to performed \nat the right moment in the right place (like oiling wheels or letting \ndown sand or salt or some other action).\nHowever they _are_ going to use a centralized server, and putting the \nsame technology (PostgreSQL/PostGIS) both on-board and on-center, would \nlet them simplify development, configuration and maintenance.\nNow that hardware is continuously getting cheaper and more powerful, \nmoving \"intelligence\" on-board may be a smart move...\n\n> \n> \n>> What performances do you think would be possible for \n>> PostgreSQL+PostGIS on such hardware???\n> \n> We never considered that solution so I couldn´t say.\nIn fact I searched the Web and found nobody that did that before :-)\n\n> \n>> Bye\n>> Paolo\n>>\n> \n> Regards,\n> Fernando.\n\nBye\nPaolo\n\n\n\n", "msg_date": "Sat, 09 May 2009 01:10:18 +0200", "msg_from": "Paolo Rizzi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "\n> They didn't give me complete information, but it should be some kind of \n> industrial PC with a 600MHz CPU. Memory should be not huge nor small, \n> maybe a couple of GBytes, hard disk should be some type of industrial \n> Compact Flash of maybe 16 GBytes.\n\n\tIt should work perfectly OK.\n\n\tRemember that you need a fast CPU if you have a database server that \nprocesses many queries from many users simultaneously.\n\tSince your \"server\" will process very few queries (maybe one per second, \nsomething like that) even a slow (by modern standards) 600 MHz CPU will be \nmore than enough...\n\tI'd say for such an application, your hardware is way overkill (it would \nwork on a smartphone...) but since hardware is so cheap...\n", "msg_date": "Sat, 09 May 2009 01:50:41 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "> \n>> They didn't give me complete information, but it should be some kind \n>> of industrial PC with a 600MHz CPU. 
Memory should be not huge nor \n>> small, maybe a couple of GBytes, hard disk should be some type of \n>> industrial Compact Flash of maybe 16 GBytes.\n> \n> It should work perfectly OK.\n> \n> Remember that you need a fast CPU if you have a database server that \n> processes many queries from many users simultaneously.\n> Since your \"server\" will process very few queries (maybe one per \n> second, something like that) even a slow (by modern standards) 600 MHz \n> CPU will be more than enough...\n> I'd say for such an application, your hardware is way overkill (it \n> would work on a smartphone...) but since hardware is so cheap...\nA smartphone... you're right, I didn't think of that, but the hardware I \ndescribed is very much like the one of a modern smartphone!!!\nAre you saying that PostgreSQL+PostGIS can actually run on a \nsmartphone??? Intriguing...\nDid anyone ever actually tried that???\n\nBye\nPaolo\n", "msg_date": "Sat, 09 May 2009 02:02:20 +0200", "msg_from": "Paolo Rizzi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "Paolo Rizzi <[email protected]> writes:\n> Are you saying that PostgreSQL+PostGIS can actually run on a \n> smartphone??? Intriguing...\n> Did anyone ever actually tried that???\n\nIf it's a supported CPU type and you've got a suitable build toolchain,\nsure. Seven or eight years ago we were getting a good laugh out of the\nfact that you could run PG on a PlayStation 2.\n\nThe real issue with the kind of hardware you're describing is going to\nbe the finite write lifetime of a flash device. For a low-update\napplication it'll probably be okay, but PG could very easily destroy a\nflash in no time if you aren't careful to minimize updates.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 May 2009 20:17:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware " }, { "msg_contents": ">> Are you saying that PostgreSQL+PostGIS can actually run on a \n>> smartphone??? Intriguing...\n>> Did anyone ever actually tried that???\n> \n> If it's a supported CPU type and you've got a suitable build toolchain,\n> sure. Seven or eight years ago we were getting a good laugh out of the\n> fact that you could run PG on a PlayStation 2.\nGood to know!!! I imagine that on a PS3 it would be _really_ fast... :-)\n\n> \n> The real issue with the kind of hardware you're describing is going to\n> be the finite write lifetime of a flash device. For a low-update\n> application it'll probably be okay, but PG could very easily destroy a\n> flash in no time if you aren't careful to minimize updates.\nThis is something I thought about too, but it's something that those \npeople (this client of mine) should be well aware of, anyway I'll point \nit out for them.\n\nAnyway it seems interesting the fact that newer Flashes use several \ntechniques, such as wear leveling, to spread writes across the least \nused cells. But this leads to files physical fragmentation, and it may \nbe a case where sequential scans are actually slower than random ones!!!\n\n> \n> \t\t\tregards, tom lane\n\nBye\nPaolo\n\n", "msg_date": "Mon, 11 May 2009 13:41:50 +0200", "msg_from": "Paolo Rizzi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "\n> A smartphone... 
you're right, I didn't think of that, but the hardware I \n> described is very much like the one of a modern smartphone!!!\n> Are you saying that PostgreSQL+PostGIS can actually run on a \n> smartphone??? Intriguing...\n> Did anyone ever actually tried that???\n\n\tWhile the performance of ARM cpus used in smartphones, PDAs, etc, is \npretty good, this hardware is optimized for small size and low power use, \nthus you generally get quite low memory bandwidth, the problem of Flash \nendurance, and lack of standard interfaces to hook up to the rest of your \nsystem.\n\tEmbedded PC-Compatible hardware in the 600 MHz range you mention would \nprobably get a DIMM memory module (maybe for the only reason that \nmass-production makes them so cheap) so you'd get a much higher memory \nbandwidth, and much larger RAM. Even if the CPU is only 2x faster than a \nsmartphone, if the memory bandwidth is 10x higher, you'll see the \ndifference. It would also have standard interfaces, very useful for you, \nand you can hook it up to a real SSD (not a micro-SD card) with real flash \nwear leveling algorithms.\n\n\tBut yeah since today's smartphones are more powerful that the desktops of \n10 years ago (which ran PG just fine) it would probably work, if you can \nrun Linux on it...\n", "msg_date": "Mon, 11 May 2009 14:48:51 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "Paolo Rizzi wrote:\n>>> Are you saying that PostgreSQL+PostGIS can actually run on a \n>>> smartphone??? Intriguing...\n>>> Did anyone ever actually tried that???\n>>\n>> If it's a supported CPU type and you've got a suitable build toolchain,\n>> sure. Seven or eight years ago we were getting a good laugh out of the\n>> fact that you could run PG on a PlayStation 2.\n> Good to know!!! I imagine that on a PS3 it would be _really_ fast... :-)\n\nwell not really - while it is fairly easy to get postgresql running on a \nPS3 it is not a fast platform. While the main CPU there is a pretty fast \nPower based core it only has 256MB of Ram and a single SATA disk \navailable(though you could add some USB disks).\n\n\nStefan\n", "msg_date": "Mon, 11 May 2009 18:05:25 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "On Mon, May 11, 2009 at 5:05 PM, Stefan Kaltenbrunner\n<[email protected]> wrote:\n>> Good to know!!! I imagine that on a PS3 it would be _really_ fast... :-)\n>\n> well not really - while it is fairly easy to get postgresql running on a PS3\n> it is not a fast platform. While the main CPU there is a pretty fast Power\n> based core it only has 256MB of Ram and a single SATA disk available(though\n> you could add some USB disks).\n\nThe nice thing about it is that TPC-C and other benchmarks all specify\ntheir bottom-line number in some unit like Transaction per second PER\nDOLLAR. So using a PS3 should be able to get ridiculously good results\ncompared to expensive server hardware...\n\n-- \ngreg\n", "msg_date": "Wed, 13 May 2009 14:53:13 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" }, { "msg_contents": "Greg Stark wrote:\n> On Mon, May 11, 2009 at 5:05 PM, Stefan Kaltenbrunner\n> <[email protected]> wrote:\n>>> Good to know!!! I imagine that on a PS3 it would be _really_ fast... 
:-)\n>> well not really - while it is fairly easy to get postgresql running on a PS3\n>> it is not a fast platform. While the main CPU there is a pretty fast Power\n>> based core it only has 256MB of Ram and a single SATA disk available(though\n>> you could add some USB disks).\n> \n> The nice thing about it is that TPC-C and other benchmarks all specify\n> their bottom-line number in some unit like Transaction per second PER\n> DOLLAR. So using a PS3 should be able to get ridiculously good results\n> compared to expensive server hardware...\n\nI kinda doubt that - the PS3 is certainly not server grade hardware so \nyou can only compare it to a desktop and I would bet that the typical \ndesktop you get for the 400€(you can get >4GB RAM a quadcore CPU for \nthat) price of a PS3 is going to outperform it significantly for almost \nevery workload...\n\n\nStefan\n\n", "msg_date": "Wed, 13 May 2009 16:03:28 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL with PostGIS on embedded hardware" } ]
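Purely as an illustration of the kind of read-only lookup the buses and trams would run on-board, a hedged PostGIS sketch; the table, columns, SRID and distance threshold are all assumptions rather than anything from the thread, and with lat/long data the ST_DWithin distance is in degrees, so a projected SRID would be the cleaner choice in practice.

CREATE INDEX zones_geom_gist ON zones USING gist (geom);

SELECT z.id, z.event_code
FROM zones z
WHERE ST_DWithin(
          z.geom,
          ST_SetSRID(ST_MakePoint(9.19, 45.46), 4326),  -- current GPS fix as (lon, lat)
          0.0005);                                      -- on the order of 50 m at these latitudes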
[ { "msg_contents": "Hi,\n\nwhat may you suggest as the most optimal postgresql.conf to keep\nwriting as stable as possible?..\n\nWhat I want is to avoid \"throughput waves\" - I want to keep my\nresponse times stable without any activity holes. I've tried to reduce\ncheckpoint timeout from 5min to 30sec - it helped, throughput is more\nstable now, but instead of big waves I have now short waves anyway..\n\nWhat is the best options combination here?..\n\nRgds,\n-Dimitri\n", "msg_date": "Mon, 11 May 2009 18:15:13 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "What is the most optimal config parameters to keep stable write TPS\n\t?.." }, { "msg_contents": "Dimitri <[email protected]> wrote: \n \n> what may you suggest as the most optimal postgresql.conf to keep\n> writing as stable as possible?..\n> \n> What I want is to avoid \"throughput waves\" - I want to keep my\n> response times stable without any activity holes. I've tried to\n> reduce checkpoint timeout from 5min to 30sec - it helped, throughput\n> is more stable now, but instead of big waves I have now short waves\n> anyway..\n> \n> What is the best options combination here?..\n \nWhat version of PostgreSQL? What operating system? What hardware?\n \nThe answers are going to depend on the answers to those questions.\n \nIt would also be good to show all lines from postgresql.conf which are\nnot commented out.\n \n-Kevin\n", "msg_date": "Mon, 11 May 2009 11:19:39 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the most optimal config parameters to\n\tkeep stable write TPS ?.." }, { "msg_contents": "Hi Kevin,\n\nPostgreSQL: 8.3.7 & 8.4\nServer: Sun M5000 32cores\nOS: Solaris 10\n\ncurrent postgresql.conf:\n\n#================================\nmax_connections = 2000 # (change requires restart)\neffective_cache_size = 48000MB\nshared_buffers = 12000MB\ntemp_buffers = 200MB\nwork_mem = 100MB # min 64kB\nmaintenance_work_mem = 600MB # min 1MB\n\nmax_fsm_pages = 2048000\nfsync = on # turns forced synchronization on or off\nsynchronous_commit = off # immediate fsync at commit\nwal_sync_method = fdatasync\nwal_buffers = 2MB\nwal_writer_delay = 400ms # 1-10000 milliseconds\n\ncheckpoint_segments = 128\ncheckpoint_timeout = 30s\n\narchive_mode = off\ntrack_counts = on\nautovacuum = on\nlog_autovacuum_min_duration = 0\nautovacuum_max_workers = 4\nautovacuum_naptime = 20 # time between autovacuum runs\nautovacuum_vacuum_threshold = 50\nautovacuum_analyze_threshold = 50\nautovacuum_vacuum_scale_factor = 0.001\n\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\n\n#================================\n\nRgds,\n-Dimitri\n\n\nOn 5/11/09, Kevin Grittner <[email protected]> wrote:\n> Dimitri <[email protected]> wrote:\n>\n>> what may you suggest as the most optimal postgresql.conf to keep\n>> writing as stable as possible?..\n>>\n>> What I want is to avoid \"throughput waves\" - I want to keep my\n>> response times stable without any activity holes. I've tried to\n>> reduce checkpoint timeout from 5min to 30sec - it helped, throughput\n>> is more stable now, but instead of big waves I have now short waves\n>> anyway..\n>>\n>> What is the best options combination here?..\n>\n> What version of PostgreSQL? What operating system? 
What hardware?\n>\n> The answers are going to depend on the answers to those questions.\n>\n> It would also be good to show all lines from postgresql.conf which are\n> not commented out.\n>\n> -Kevin\n>\n", "msg_date": "Mon, 11 May 2009 18:31:58 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." }, { "msg_contents": "Dimitri <[email protected]> wrote: \n \n> PostgreSQL: 8.3.7 & 8.4\n> Server: Sun M5000 32cores\n> OS: Solaris 10\n \nDoes that have a battery backed RAID controller? If so, is it\nconfigured for write-back? These both help a lot with smoothing\ncheckpoint I/O gluts.\n \nWe've minimized problems by making the background writer more\naggressive. 8.3 and later does a better job in general, but we've\nstill had to go with:\n \nbgwriter_lru_maxpages = 1000\nbgwriter_lru_multiplier = 4.0\n \n> shared_buffers = 12000MB\n \nYou might want to test with that set to something much lower, to see\nwhat the checkpoint delays look like. We've found it best to use a\nsmall (256MB) setting, and leave caching to the OS; in our\nenvironment, it seems to do a better job of scheduling the disk I/O. \nYMMV, of course.\n \n-Kevin\n", "msg_date": "Mon, 11 May 2009 11:45:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the most optimal config parameters to\n\tkeep stable write TPS ?.." }, { "msg_contents": "Thanks a lot, I'll try them all!\n\nYes, I have a good external storage box with battery backed cache enabled.\nThere are 64GB of RAM so I expected it'll help little bit to increase\na buffer cache, but ok, will see if with 256MB it'll be better :-)\n\nWhat about \"full_page_writes\" ? seems it's \"on\" by default. Does it\nmakes sense to put if off?..\n\nRgds,\n-Dimitri\n\n\n\n\nOn 5/11/09, Kevin Grittner <[email protected]> wrote:\n> Dimitri <[email protected]> wrote:\n>\n>> PostgreSQL: 8.3.7 & 8.4\n>> Server: Sun M5000 32cores\n>> OS: Solaris 10\n>\n> Does that have a battery backed RAID controller? If so, is it\n> configured for write-back? These both help a lot with smoothing\n> checkpoint I/O gluts.\n>\n> We've minimized problems by making the background writer more\n> aggressive. 8.3 and later does a better job in general, but we've\n> still had to go with:\n>\n> bgwriter_lru_maxpages = 1000\n> bgwriter_lru_multiplier = 4.0\n>\n>> shared_buffers = 12000MB\n>\n> You might want to test with that set to something much lower, to see\n> what the checkpoint delays look like. We've found it best to use a\n> small (256MB) setting, and leave caching to the OS; in our\n> environment, it seems to do a better job of scheduling the disk I/O.\n> YMMV, of course.\n>\n> -Kevin\n>\n", "msg_date": "Mon, 11 May 2009 19:02:56 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." }, { "msg_contents": "Dimitri <[email protected]> wrote:\n \n> What about \"full_page_writes\" ? seems it's \"on\" by default. Does it\n> makes sense to put if off?..\n \nIt would probably help with performance, but the description is a\nlittle disconcerting in terms of crash recovery. We tried running\nwith it off for a while (a year or so back), but had problems with\ncorruption. 
I think the specific cause of that has since been fixed,\nit's left us a bit leery of the option.\n \nMaybe someone else can speak to how safe (or not) the current\nimplementation of that option is.\n \n-Kevin\n", "msg_date": "Mon, 11 May 2009 12:13:51 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the most optimal config parameters to\n\tkeep stable write TPS ?.." }, { "msg_contents": "On Mon, May 11, 2009 at 10:31 AM, Dimitri <[email protected]> wrote:\n> Hi Kevin,\n>\n> PostgreSQL: 8.3.7 & 8.4\n> Server: Sun M5000 32cores\n> OS: Solaris 10\n>\n> current postgresql.conf:\n>\n> #================================\n> max_connections = 2000                  # (change requires restart)\n> effective_cache_size = 48000MB\n> shared_buffers = 12000MB\n> temp_buffers = 200MB\n> work_mem = 100MB                                # min 64kB\n> maintenance_work_mem = 600MB            # min 1MB\n>\n> max_fsm_pages = 2048000\n> fsync = on                              # turns forced synchronization on or off\n> synchronous_commit = off                # immediate fsync at commit\n> wal_sync_method = fdatasync\n> wal_buffers = 2MB\n> wal_writer_delay = 400ms                # 1-10000 milliseconds\n>\n> checkpoint_segments = 128\n> checkpoint_timeout = 30s\n\nWhat's your checkpoint completion target set to? Crank that up a bit\not 0.7, 0.8 etc and make the timeout more, not less. That should\nhelp.\n\nAlso, look into better hardware (RAID controller with battery backed\ncache) and also putting pg_xlog on a separate RAID-1 set (or RAID-10\nset if you've got a lot of drives under the postgres data set).\n", "msg_date": "Mon, 11 May 2009 11:15:51 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." }, { "msg_contents": "OK, it'll be better to avoid a such improvement :-)\nPerformance - yes, but not for any price :-)\n\nThank you!\n\nRgds,\n-Dimitri\n\nOn 5/11/09, Kevin Grittner <[email protected]> wrote:\n> Dimitri <[email protected]> wrote:\n>\n>> What about \"full_page_writes\" ? seems it's \"on\" by default. Does it\n>> makes sense to put if off?..\n>\n> It would probably help with performance, but the description is a\n> little disconcerting in terms of crash recovery. We tried running\n> with it off for a while (a year or so back), but had problems with\n> corruption. I think the specific cause of that has since been fixed,\n> it's left us a bit leery of the option.\n>\n> Maybe someone else can speak to how safe (or not) the current\n> implementation of that option is.\n>\n> -Kevin\n>\n", "msg_date": "Mon, 11 May 2009 19:31:33 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." 
}, { "msg_contents": "Hi Scott,\n\ngood point - the current checkpoint completion target is a default\n0.5, and it makes sense to set it to 0.8 to make writing more smooth,\ngreat!\n\nyes, data and xlog are separated, each one is sitting on an\nindependent storage LUN RAID1, and storage box is enough performant\n\nThank you!\n\nRgds,\n-Dimitri\n\n\nOn 5/11/09, Scott Marlowe <[email protected]> wrote:\n> On Mon, May 11, 2009 at 10:31 AM, Dimitri <[email protected]> wrote:\n>> Hi Kevin,\n>>\n>> PostgreSQL: 8.3.7 & 8.4\n>> Server: Sun M5000 32cores\n>> OS: Solaris 10\n>>\n>> current postgresql.conf:\n>>\n>> #================================\n>> max_connections = 2000 # (change requires restart)\n>> effective_cache_size = 48000MB\n>> shared_buffers = 12000MB\n>> temp_buffers = 200MB\n>> work_mem = 100MB # min 64kB\n>> maintenance_work_mem = 600MB # min 1MB\n>>\n>> max_fsm_pages = 2048000\n>> fsync = on # turns forced synchronization on\n>> or off\n>> synchronous_commit = off # immediate fsync at commit\n>> wal_sync_method = fdatasync\n>> wal_buffers = 2MB\n>> wal_writer_delay = 400ms # 1-10000 milliseconds\n>>\n>> checkpoint_segments = 128\n>> checkpoint_timeout = 30s\n>\n> What's your checkpoint completion target set to? Crank that up a bit\n> ot 0.7, 0.8 etc and make the timeout more, not less. That should\n> help.\n>\n> Also, look into better hardware (RAID controller with battery backed\n> cache) and also putting pg_xlog on a separate RAID-1 set (or RAID-10\n> set if you've got a lot of drives under the postgres data set).\n>\n", "msg_date": "Mon, 11 May 2009 19:36:39 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." }, { "msg_contents": "On Mon, 11 May 2009, Dimitri wrote:\n\n> I've tried to reduce checkpoint timeout from 5min to 30sec - it helped, \n> throughput is more stable now, but instead of big waves I have now short \n> waves anyway..\n\nTuning for very tiny checkpoints all of the time is one approach here. \nThe other is to push up checkpoint_segments (done in your case), \ncheckpoint_timeout, and checkpoint_completion_target to as high as you \ncan, in order to spread the checkpoint period over as much time as \npossible. Reducing shared_buffers can also help in both cases, you've set \nthat to an extremely high value.\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm is a \nlong discussion of just this topic, if you saw a serious change by \nadjusting checkpoint_timeout than further experimentation in this area is \nlikely to help you out.\n\nYou might also want to look at the filesystem parameters you're using \nunder Solaris. ZFS in particular can cache more writes than you may \nexpect, which can lead to that all getting pushed out at the very end of \ncheckpoint time. That may very well be the source of your \"waves\", on a \nsystem with 64GB of RAM for all we know *every* write you're doing between \ncheckpoints is being buffered until the fsyncs at the checkpoint end. 
\nThere were a couple of sessions at PG East last year that mentioned this \narea, I put a summary of suggestions and links to more detail at \nhttp://notemagnet.blogspot.com/2008/04/conference-east-08-and-solaris-notes.html\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 11 May 2009 22:15:32 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." }, { "msg_contents": "On Mon, May 11, 2009 at 8:15 PM, Greg Smith <[email protected]> wrote:\n\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm is a long\n> discussion of just this topic, if you saw a serious change by adjusting\n> checkpoint_timeout than further experimentation in this area is likely to\n> help you out.\n\nI highly recommend your article on the background writer. Reading the\none on tuning the 8.2 bgw allowed me to make some changes to the\nproduction servers at my last job that made a huge difference in\nsustained tps on a logging server\n", "msg_date": "Mon, 11 May 2009 20:38:38 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." }, { "msg_contents": "On Mon, May 11, 2009 at 6:31 PM, Dimitri <[email protected]> wrote:\n> Hi Kevin,\n>\n> PostgreSQL: 8.3.7 & 8.4\n> Server: Sun M5000 32cores\n> OS: Solaris 10\n>\n> current postgresql.conf:\n>\n> #================================\n> max_connections = 2000                  # (change requires restart)\n\nAre you sure about the 2000 connections ?\nWhy don't you use a pgbouncer or pgpool instead ?\n\n\n-- \nF4FQM\nKerunix Flan\nLaurent Laborde\n", "msg_date": "Tue, 12 May 2009 11:23:46 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." }, { "msg_contents": "It's just one of the test conditions - \"what if there 2000 users?\" - I\nknow I may use pgpool or others, but I also need to know the limits of\nthe database engine itself.. For the moment I'm limiting to 256\nconcurrent sessions, but config params are kept like for 2000 :-)\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Laurent Laborde <[email protected]> wrote:\n> On Mon, May 11, 2009 at 6:31 PM, Dimitri <[email protected]> wrote:\n>> Hi Kevin,\n>>\n>> PostgreSQL: 8.3.7 & 8.4\n>> Server: Sun M5000 32cores\n>> OS: Solaris 10\n>>\n>> current postgresql.conf:\n>>\n>> #================================\n>> max_connections = 2000                  # (change requires restart)\n>\n> Are you sure about the 2000 connections ?\n> Why don't you use a pgbouncer or pgpool instead ?\n>\n>\n> --\n> F4FQM\n> Kerunix Flan\n> Laurent Laborde\n>\n", "msg_date": "Tue, 12 May 2009 11:36:36 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." }, { "msg_contents": "Hi\n\n>>>>> \"D\" == Dimitri <[email protected]> writes:\n\nD> current postgresql.conf:\n\nD> #================================\nD> max_connections = 2000 # (change requires restart)\nD> temp_buffers = 200MB\n\ntemp_buffers are kept per connection and not freed until the session\nends. 
If you use some kind of connection pooling this can eat up a lot\nof ram that could be used for caching instead.\n\nRegards,\nJulian\n", "msg_date": "Tue, 12 May 2009 15:15:23 +0200", "msg_from": "[email protected] (Julian v. Bock)", "msg_from_op": false, "msg_subject": "Re: What is the most optimal config parameters to keep stable write\n\tTPS ?.." }, { "msg_contents": "Good point! I missed it.. - will 20MB be enough?\n\nRgds,\n-Dimitri\n\nOn 5/12/09, Julian v. Bock <[email protected]> wrote:\n> Hi\n>\n>>>>>> \"D\" == Dimitri <[email protected]> writes:\n>\n> D> current postgresql.conf:\n>\n> D> #================================\n> D> max_connections = 2000 # (change requires restart)\n> D> temp_buffers = 200MB\n>\n> temp_buffers are kept per connection and not freed until the session\n> ends. If you use some kind of connection pooling this can eat up a lot\n> of ram that could be used for caching instead.\n>\n> Regards,\n> Julian\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 12 May 2009 17:41:14 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is the most optimal config parameters to keep\n\tstable write TPS ?.." } ]
[ { "msg_contents": "I'm running version 8.1.11 on SLES 10 SP2. I'm trying to improve this \nquery and unfortunately I cannot change the application. For some \nreason the planner is making a bad decision sometimes after an analyze \nof table objectcustomfieldvalues. \n\n\nThe query is:\nSELECT DISTINCT main.* FROM Tickets main JOIN CustomFields \nCustomFields_1 ON ( CustomFields_1.Name = 'QA Origin' ) JOIN \nCustomFields CustomFields_3 ON (CustomFields_3.Name = 'QA Group Code' ) \nJOIN ObjectCustomFieldValues ObjectCustomFieldValues_4 ON \n(ObjectCustomFieldValues_4.ObjectId = main.id ) AND ( \nObjectCustomFieldValues_4.Disabled = '0' ) AND \n(ObjectCustomFieldValues_4.ObjectType = 'RT::Ticket' ) AND ( \nObjectCustomFieldValues_4.CustomField = CustomFields_3.id ) JOIN \nObjectCustomFieldValues ObjectCustomFieldValues_2 ON ( \nObjectCustomFieldValues_2.Disabled = '0' ) AND \n(ObjectCustomFieldValues_2.ObjectId = main.id ) AND ( \nObjectCustomFieldValues_2.CustomField = CustomFields_1.id ) AND \n(ObjectCustomFieldValues_2.ObjectType = 'RT::Ticket' ) WHERE \n(main.Status != 'deleted') AND (main.Queue = '60' AND \nObjectCustomFieldValues_2.Content LIKE '%Patient Sat Survey%' AND \nObjectCustomFieldValues_4.Content LIKE'%MOT%') AND (main.EffectiveId = \nmain.id) AND (main.Type = 'ticket') ORDER BY main.id ASC;\n\n\nHere is the query run in 12379.816 ms:\n\n Unique (cost=1560.06..1560.12 rows=1 width=181) (actual \ntime=12379.573..12379.642 rows=13 loops=1)\n -> Sort (cost=1560.06..1560.06 rows=1 width=181) (actual \ntime=12379.568..12379.586 rows=13 loops=1)\n Sort Key: main.id, main.effectiveid, main.queue, main.\"type\", \nmain.issuestatement, main.resolution, main.\"owner\", main.subject, \nmain.initialpriority, main.finalpriority, main.priority, \nmain.timeestimated, main.timeworked, main.status, main.timeleft, \nmain.told, main.starts, main.started, main.due, main.resolved, \nmain.lastupdatedby, main.lastupdated, main.creator, main.created, \nmain.disabled\n -> Nested Loop (cost=0.00..1560.05 rows=1 width=181) (actual \ntime=9081.782..12379.303 rows=13 loops=1)\n Join Filter: (\"outer\".objectid = \"inner\".id)\n -> Nested Loop (cost=0.00..849.90 rows=1 width=8) \n(actual time=9059.881..12052.548 rows=13 loops=1)\n Join Filter: (\"outer\".objectid = \"inner\".objectid)\n -> Nested Loop (cost=0.00..424.19 rows=1 width=4) \n(actual time=0.274..26.660 rows=1575 loops=1)\n -> Index Scan using customfields_pkey on \ncustomfields customfields_1 (cost=0.00..16.41 rows=1 width=4) (actual \ntime=0.228..0.371 rows=1 loops=1)\n Filter: ((name)::text = 'QA Origin'::text)\n -> Index Scan using ticketcustomfieldvalues2 \non objectcustomfieldvalues objectcustomfieldvalues_2 (cost=0.00..407.76 \nrows=1 width=8) (actual time=0.039..21.243 rows=1575 loops=1)\n Index Cond: \n(objectcustomfieldvalues_2.customfield = \"outer\".id)\n Filter: ((disabled = 0) AND \n((objecttype)::text = 'RT::Ticket'::text) AND ((content)::text ~~ \n'%Patient Sat Survey%'::text))\n -> Nested Loop (cost=0.00..424.99 rows=58 \nwidth=4) (actual time=5.188..7.605 rows=18 loops=1575)\n -> Index Scan using customfields_pkey on \ncustomfields customfields_3 (cost=0.00..16.41 rows=1 width=4) (actual \ntime=0.235..0.419 rows=1 loops=1575)\n Filter: ((name)::text = 'QA Group \nCode'::text)\n -> Index Scan using ticketcustomfieldvalues2 \non objectcustomfieldvalues objectcustomfieldvalues_4 (cost=0.00..407.76 \nrows=65 width=8) (actual time=4.947..7.130 rows=18 loops=1575)\n Index Cond: \n(objectcustomfieldvalues_4.customfield = 
\"outer\".id)\n Filter: ((disabled = 0) AND \n((objecttype)::text = 'RT::Ticket'::text) AND ((content)::text ~~ \n'%MOT%'::text))\n -> Index Scan using tickets1 on tickets main \n(cost=0.00..709.77 rows=30 width=181) (actual time=0.020..17.104 \nrows=5743 loops=13)\n Index Cond: (queue = 60)\n Filter: (((status)::text <> 'deleted'::text) AND \n(effectiveid = id) AND ((\"type\")::text = 'ticket'::text))\n Total runtime: 12379.816 ms\n(23 rows)\n\n\nselect attname,n_distinct from pg_stats where tablename='tickets';\n attname | n_distinct\n-----------------+------------\n id | -1\n effectiveid | -0.968462\n queue | 37\n type | 1\n issuestatement | 1\n resolution | 1\n owner | 123\n subject | -0.885148\n initialpriority | 12\n finalpriority | 9\n priority | 43\n timeestimated | 5\n timeworked | 5\n status | 6\n timeleft | 3\n told | -0.128088\n starts | 60\n started | -0.862352\n due | 1270\n resolved | -0.94381\n lastupdatedby | 366\n lastupdated | -0.98511\n creator | 1054\n created | -0.965858\n disabled | 1\n(25 rows)\n\n\nselect attname,n_distinct from pg_stats where tablename='customfields';\n attname | n_distinct\n---------------+------------\n id | -1\n name | -0.83855\n type | 3\n description | -0.202636\n sortorder | -0.110379\n creator | 4\n created | -0.739703\n lastupdatedby | 4\n lastupdated | -0.78089\n disabled | 2\n lookuptype | 1\n repeated | 1\n pattern | 1\n maxvalues | 2\n(14 rows)\n\n\nselect attname,n_distinct from pg_stats where \ntablename='objectcustomfieldvalues';\n attname | n_distinct\n-----------------+------------\n id | -1\n objectid | 95048\n customfield | 543\n content | 30209\n creator | 115\n created | -0.268605\n lastupdatedby | 115\n lastupdated | -0.26863\n objecttype | 1\n largecontent | -1\n contenttype | 1\n contentencoding | 2\n sortorder | 1\n disabled | 2\n(14 rows)\n\n\n\nIf I now analyze objectcustomfieldvalues the query now takes 51747.341 ms:\n\n Unique (cost=1564.95..1565.02 rows=1 width=181) (actual \ntime=51747.087..51747.152 rows=13 loops=1)\n -> Sort (cost=1564.95..1564.95 rows=1 width=181) (actual \ntime=51747.083..51747.097 rows=13 loops=1)\n Sort Key: main.id, main.effectiveid, main.queue, main.\"type\", \nmain.issuestatement, main.resolution, main.\"owner\", main.subject, \nmain.initialpriority, main.finalpriority, main.priority, \nmain.timeestimated, main.timeworked, main.status, main.timeleft, \nmain.told, main.starts, main.started, main.due, main.resolved, \nmain.lastupdatedby, main.lastupdated, main.creator, main.created, \nmain.disabled\n -> Nested Loop (cost=0.00..1564.94 rows=1 width=181) (actual \ntime=38871.343..51746.920 rows=13 loops=1)\n Join Filter: (\"inner\".objectid = \"outer\".id)\n -> Nested Loop (cost=0.00..1136.77 rows=1 width=185) \n(actual time=7.772..39862.238 rows=1548 loops=1)\n Join Filter: (\"outer\".objectid = \"inner\".id)\n -> Nested Loop (cost=0.00..426.63 rows=1 width=4) \n(actual time=0.266..27.745 rows=1575 loops=1)\n -> Index Scan using customfields_pkey on \ncustomfields customfields_1 (cost=0.00..16.41 rows=1 width=4) (actual \ntime=0.219..0.404 rows=1 loops=1)\n Filter: ((name)::text = 'QA Origin'::text)\n -> Index Scan using ticketcustomfieldvalues2 \non objectcustomfieldvalues objectcustomfieldvalues_2 (cost=0.00..410.20 \nrows=1 width=8) (actual time=0.040..22.006 rows=1575 loops=1)\n Index Cond: \n(objectcustomfieldvalues_2.customfield = \"outer\".id)\n Filter: ((disabled = 0) AND \n((objecttype)::text = 'RT::Ticket'::text) AND ((content)::text ~~ \n'%Patient Sat Survey%'::text))\n -> Index Scan 
using tickets1 on tickets main \n(cost=0.00..709.77 rows=30 width=181) (actual time=0.015..17.164 \nrows=5743 loops=1575)\n Index Cond: (queue = 60)\n Filter: (((status)::text <> 'deleted'::text) \nAND (effectiveid = id) AND ((\"type\")::text = 'ticket'::text))\n -> Nested Loop (cost=0.00..427.44 rows=58 width=4) \n(actual time=5.207..7.646 rows=18 loops=1548)\n -> Index Scan using customfields_pkey on \ncustomfields customfields_3 (cost=0.00..16.41 rows=1 width=4) (actual \ntime=0.242..0.434 rows=1 loops=1548)\n Filter: ((name)::text = 'QA Group Code'::text)\n -> Index Scan using ticketcustomfieldvalues2 on \nobjectcustomfieldvalues objectcustomfieldvalues_4 (cost=0.00..410.20 \nrows=66 width=8) (actual time=4.958..7.154 rows=18 loops=1548)\n Index Cond: \n(objectcustomfieldvalues_4.customfield = \"outer\".id)\n Filter: ((disabled = 0) AND \n((objecttype)::text = 'RT::Ticket'::text) AND ((content)::text ~~ \n'%MOT%'::text))\n Total runtime: 51747.341 ms\n(23 rows)\n\n\nselect attname,n_distinct from pg_stats where \ntablename='objectcustomfieldvalues';\n attname | n_distinct\n-----------------+------------\n id | -1\n objectid | 95017\n customfield | 539\n content | 30287\n creator | 114\n created | -0.268403\n lastupdatedby | 114\n lastupdated | -0.268809\n objecttype | 1\n largecontent | -1\n contenttype | 1\n contentencoding | 2\n sortorder | 1\n disabled | 2\n(14 rows)\n\n\nEven better yet, if I turn off enable_nestloop the query runs in \n3499.970 ms:\n\n Unique (cost=53860.11..53860.18 rows=1 width=181) (actual \ntime=3499.614..3499.684 rows=13 loops=1)\n -> Sort (cost=53860.11..53860.11 rows=1 width=181) (actual \ntime=3499.608..3499.627 rows=13 loops=1)\n Sort Key: main.id, main.effectiveid, main.queue, main.\"type\", \nmain.issuestatement, main.resolution, main.\"owner\", main.subject, \nmain.initialpriority, main.finalpriority, main.priority, \nmain.timeestimated, main.timeworked, main.status, main.timeleft, \nmain.told, main.starts, main.started, main.due, main.resolved, \nmain.lastupdatedby, main.lastupdated, main.creator, main.created, \nmain.disabled\n -> Hash Join (cost=27240.41..53860.10 rows=1 width=181) \n(actual time=3429.166..3499.538 rows=13 loops=1)\n Hash Cond: (\"outer\".objectid = \"inner\".id)\n -> Merge Join (cost=0.00..26619.39 rows=58 width=4) \n(actual time=1666.503..1736.814 rows=18 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".customfield)\n -> Index Scan using customfields_pkey on \ncustomfields customfields_3 (cost=0.00..16.41 rows=1 width=4) (actual \ntime=0.221..0.410 rows=1 loops=1)\n Filter: ((name)::text = 'QA Group Code'::text)\n -> Index Scan using ticketcustomfieldvalues2 on \nobjectcustomfieldvalues objectcustomfieldvalues_4 (cost=0.00..26514.04 \nrows=35342 width=8) (actual time=98.035..1736.277 rows=44 loops=1)\n Filter: ((disabled = 0) AND \n((objecttype)::text = 'RT::Ticket'::text) AND ((content)::text ~~ \n'%MOT%'::text))\n -> Hash (cost=27240.40..27240.40 rows=1 width=185) \n(actual time=1762.572..1762.572 rows=1548 loops=1)\n -> Hash Join (cost=26530.47..27240.40 rows=1 \nwidth=185) (actual time=1728.887..1758.147 rows=1548 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".objectid)\n -> Index Scan using tickets1 on tickets \nmain (cost=0.00..709.77 rows=30 width=181) (actual time=0.024..17.550 \nrows=5743 loops=1)\n Index Cond: (queue = 60)\n Filter: (((status)::text <> \n'deleted'::text) AND (effectiveid = id) AND ((\"type\")::text = \n'ticket'::text))\n -> Hash (cost=26530.47..26530.47 rows=1 \nwidth=4) (actual time=1728.787..1728.787 
rows=1575 loops=1)\n -> Merge Join (cost=0.00..26530.47 \nrows=1 width=4) (actual time=1493.343..1726.020 rows=1575 loops=1)\n Merge Cond: (\"outer\".id = \n\"inner\".customfield)\n -> Index Scan using \ncustomfields_pkey on customfields customfields_1 (cost=0.00..16.41 \nrows=1 width=4) (actual time=0.237..0.429 rows=1 loops=1)\n Filter: ((name)::text = 'QA \nOrigin'::text)\n -> Index Scan using \nticketcustomfieldvalues2 on objectcustomfieldvalues \nobjectcustomfieldvalues_2 (cost=0.00..26514.04 rows=1 width=8) (actual \ntime=1493.091..1721.155 rows=1575 loops=1)\n Filter: ((disabled = 0) AND \n((objecttype)::text = 'RT::Ticket'::text) AND ((content)::text ~~ \n'%Patient Sat Survey%'::text))\n Total runtime: 3499.970 ms\n(25 rows)\n\n\n\n\nIs there anything I can do to improve this?\n\n\n~Cory Coager\n\n\n\n------------------------------------------------------------------------\nThe information contained in this communication is intended\nonly for the use of the recipient(s) named above. It may\ncontain information that is privileged or confidential, and\nmay be protected by State and/or Federal Regulations. If\nthe reader of this message is not the intended recipient,\nyou are hereby notified that any dissemination,\ndistribution, or copying of this communication, or any of\nits contents, is strictly prohibited. If you have received\nthis communication in error, please return it to the sender\nimmediately and delete the original message and any copy\nof it from your computer system. If you have any questions\nconcerning this message, please contact the sender.\n------------------------------------------------------------------------\n\n", "msg_date": "Mon, 11 May 2009 17:03:15 -0400", "msg_from": "Cory Coager <[email protected]>", "msg_from_op": true, "msg_subject": "Query planner making bad decisions" }, { "msg_contents": "Cory Coager <[email protected]> writes:\n> Even better yet, if I turn off enable_nestloop the query runs in \n> 3499.970 ms:\n\nThe reason it prefers a nestloop inappropriately is a mistaken estimate\nthat some plan node is only going to yield a very small number of rows\n(like one or two --- there's not a hard cutoff, but usually more than\na couple of estimated rows will lead it away from a nestloop).\nIn this case the worst problem seems to be here:\n\n> -> Index Scan using \n> ticketcustomfieldvalues2 on objectcustomfieldvalues \n> objectcustomfieldvalues_2 (cost=0.00..26514.04 rows=1 width=8) (actual \n> time=1493.091..1721.155 rows=1575 loops=1)\n> Filter: ((disabled = 0) AND \n> ((objecttype)::text = 'RT::Ticket'::text) AND ((content)::text ~~ \n> '%Patient Sat Survey%'::text))\n\nwhere we're off by a factor of 1500+ :-(\n\nI think most likely the ~~ operator is the biggest problem.\nUnfortunately 8.1's estimator for ~~ is not terribly bright. You could\ntry increasing your statistics target but I don't think it will help\nmuch. Is there any chance of updating to 8.2 or later? 
8.2 can do\nsignificantly better on this type of estimate as long as it has enough\nstats.\n\nIn any case I'd suggest raising default_statistics_target to 100 or so,\nas you seem to be running queries complex enough to need good stats.\nBut I'm not sure that that will be enough to fix the problem in 8.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 May 2009 19:02:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner making bad decisions " }, { "msg_contents": "Tom Lane said the following on 05/11/2009 07:02 PM:\n> where we're off by a factor of 1500+ :-(\n>\n> I think most likely the ~~ operator is the biggest problem.\n> Unfortunately 8.1's estimator for ~~ is not terribly bright. You could\n> try increasing your statistics target but I don't think it will help\n> much. Is there any chance of updating to 8.2 or later? 8.2 can do\n> significantly better on this type of estimate as long as it has enough\n> stats.\n>\n> In any case I'd suggest raising default_statistics_target to 100 or so,\n> as you seem to be running queries complex enough to need good stats.\n> But I'm not sure that that will be enough to fix the problem in 8.1.\n>\n> \t\t\tregards, tom lane\n> \nI should have mentioned the statistics for every column are already set \nto 1000. I guess we'll have to add an upgrade to the project list. \nThanks for the info.\n\n\n\n------------------------------------------------------------------------\nThe information contained in this communication is intended\nonly for the use of the recipient(s) named above. It may\ncontain information that is privileged or confidential, and\nmay be protected by State and/or Federal Regulations. If\nthe reader of this message is not the intended recipient,\nyou are hereby notified that any dissemination,\ndistribution, or copying of this communication, or any of\nits contents, is strictly prohibited. If you have received\nthis communication in error, please return it to the sender\nimmediately and delete the original message and any copy\nof it from your computer system. If you have any questions\nconcerning this message, please contact the sender.\n------------------------------------------------------------------------\n\n", "msg_date": "Tue, 12 May 2009 08:05:19 -0400", "msg_from": "Cory Coager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner making bad decisions" }, { "msg_contents": "Cory Coager wrote:\n> I'm running version 8.1.11 on SLES 10 SP2. I'm trying to improve this\n> query and unfortunately I cannot change the application. 
For some\n> reason the planner is making a bad decision sometimes after an analyze\n> of table objectcustomfieldvalues.\n> \n> The query is:\n> SELECT DISTINCT main.* FROM Tickets main JOIN CustomFields\n> CustomFields_1 ON ( CustomFields_1.Name = 'QA Origin' ) JOIN\n> CustomFields CustomFields_3 ON (CustomFields_3.Name = 'QA Group Code' )\n> JOIN ObjectCustomFieldValues ObjectCustomFieldValues_4 ON\n> (ObjectCustomFieldValues_4.ObjectId = main.id ) AND (\n> ObjectCustomFieldValues_4.Disabled = '0' ) AND\n> (ObjectCustomFieldValues_4.ObjectType = 'RT::Ticket' ) AND (\n> ObjectCustomFieldValues_4.CustomField = CustomFields_3.id ) JOIN\n> ObjectCustomFieldValues ObjectCustomFieldValues_2 ON (\n> ObjectCustomFieldValues_2.Disabled = '0' ) AND\n> (ObjectCustomFieldValues_2.ObjectId = main.id ) AND (\n> ObjectCustomFieldValues_2.CustomField = CustomFields_1.id ) AND\n> (ObjectCustomFieldValues_2.ObjectType = 'RT::Ticket' ) WHERE\n> (main.Status != 'deleted') AND (main.Queue = '60' AND\n> ObjectCustomFieldValues_2.Content LIKE '%Patient Sat Survey%' AND\n> ObjectCustomFieldValues_4.Content LIKE'%MOT%') AND (main.EffectiveId =\n> main.id) AND (main.Type = 'ticket') ORDER BY main.id ASC;\n> \n> \n\nHello\n\nJust in case you want this information. Our RT installation running on\n8.3.6 / RHEL4 and with default_statistics_target=100 gives us this query\nplan:\n\nUnique (cost=1360.05..1360.12 rows=1 width=161) (actual\ntime=2141.834..2141.834 rows=0 loops=1)\n -> Sort (cost=1360.05..1360.06 rows=1 width=161) (actual\ntime=2141.831..2141.831 rows=0 loops=1)\n Sort Key: main.effectiveid, main.issuestatement,\nmain.resolution, main.owner, main.subject, main.initialpriority,\nmain.finalpriority, main.priority, main.timeestimated, main.timeworked,\nmain.status, main.timeleft, main.told, main.starts, main.started,\nmain.due, main.resolved, main.lastupdatedby, main.lastupdated,\nmain.creator, main.created, main.disabled\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=14.14..1360.04 rows=1 width=161) (actual\ntime=2141.724..2141.724 rows=0 loops=1)\n -> Nested Loop (cost=14.14..1358.09 rows=1 width=165)\n(actual time=2141.717..2141.717 rows=0 loops=1)\n -> Nested Loop (cost=14.14..1356.14 rows=1\nwidth=169) (actual time=2141.715..2141.715 rows=0 loops=1)\n -> Nested Loop (cost=14.14..1348.69 rows=1\nwidth=169) (actual time=2141.711..2141.711 rows=0 loops=1)\n -> Bitmap Heap Scan on tickets main\n(cost=14.14..1333.78 rows=2 width=161) (actual time=0.906..26.413\nrows=1046 loops=1)\n Recheck Cond: (queue = 60)\n Filter: (((status)::text <>\n'deleted'::text) AND (effectiveid = id) AND ((type)::text = 'ticket'::text))\n -> Bitmap Index Scan on tickets1\n (cost=0.00..14.14 rows=781 width=0) (actual time=0.662..0.662 rows=1188\nloops=1)\n Index Cond: (queue = 60)\n -> Index Scan using\nobjectcustomfieldvalues3 on objectcustomfieldvalues\nobjectcustomfieldvalues_2 (cost=0.00..7.44 rows=1 width=8) (actual\ntime=2.017..2.017 rows=0 loops=1046)\n Index Cond:\n((objectcustomfieldvalues_2.disabled = 0) AND\n(objectcustomfieldvalues_2.objectid = main.effectiveid) AND\n((objectcustomfieldvalues_2.objecttype)::text = 'RT::Ticket'::text))\n Filter:\n((objectcustomfieldvalues_2.content)::text ~~ '%Patient Sat Survey%'::text)\n -> Index Scan using objectcustomfieldvalues3\non objectcustomfieldvalues objectcustomfieldvalues_4 (cost=0.00..7.44\nrows=1 width=8) (never executed)\n Index Cond:\n((objectcustomfieldvalues_4.disabled = 0) AND\n(objectcustomfieldvalues_4.objectid = main.effectiveid) 
AND\n((objectcustomfieldvalues_4.objecttype)::text = 'RT::Ticket'::text))\n Filter:\n((objectcustomfieldvalues_4.content)::text ~~ '%MOT%'::text)\n -> Index Scan using customfields_pkey on\ncustomfields customfields_3 (cost=0.00..1.94 rows=1 width=4) (never\nexecuted)\n Index Cond: (customfields_3.id =\nobjectcustomfieldvalues_4.customfield)\n Filter: ((customfields_3.name)::text = 'QA\nGroup Code'::text)\n -> Index Scan using customfields_pkey on customfields\ncustomfields_1 (cost=0.00..1.94 rows=1 width=4) (never executed)\n Index Cond: (customfields_1.id =\nobjectcustomfieldvalues_2.customfield)\n Filter: ((customfields_1.name)::text = 'QA\nOrigin'::text)\n Total runtime: 2142.347 ms\n(26 rows)\n\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Tue, 12 May 2009 23:25:02 +0200", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner making bad decisions" } ]
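For readers hitting the same RT query, a minimal SQL sketch of the two mitigations discussed in this thread. The table and column names are taken from the thread itself; the statistics value and the transaction-scoped override are illustrative assumptions rather than settings verified against this RT schema, and as Tom notes the LIKE '%...%' estimate may not improve much on 8.1 however high the statistics go.

-- Give the planner more data for the hard-to-estimate column
-- (the thread notes these were in fact already at 1000):
ALTER TABLE objectcustomfieldvalues ALTER COLUMN content SET STATISTICS 1000;
ANALYZE objectcustomfieldvalues;

-- Transaction-scoped form of the workaround that produced the 3.5 s plan above,
-- where the application allows wrapping the query; it disables nested loops for
-- everything in the transaction, so keep its scope as narrow as possible:
BEGIN;
SET LOCAL enable_nestloop = off;
-- ... run the RT search query here ...
COMMIT;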
[ { "msg_contents": "I have the following table:\n\nCREATE TABLE \"temp\".tmp_135528\n(\n id integer NOT NULL,\n prid integer,\n group_id integer,\n iinv integer,\n oinv integer,\n isum numeric,\n osum numeric,\n idate timestamp without time zone,\n odate timestamp without time zone,\n CONSTRAINT t_135528_pk PRIMARY KEY (id)\n)\nWITH (OIDS=FALSE);\n\nWith index:\n\nCREATE INDEX t_135528\n ON \"temp\".tmp_135528\n USING btree\n (idate, group_id, osum, oinv);\n\nWhen the following query is executed the index is not used:\n\nEXPLAIN SELECT id, osum\nFROM temp.tmp_135528\nWHERE idate <= '2007-05-17 00:00:00'::timestamp\nAND group_id = '13'\nAND osum <= '19654.45328'\nAND oinv = -1\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tmp_135528 (cost=0.00..7022.36 rows=1166 width=11)\n Filter: ((idate <= '2007-05-17 00:00:00'::timestamp without time zone) AND \n(osum <= 19654.45328) AND (group_id = 13) AND (oinv = (-1)))\n(2 rows)\n\nWhen \n\"idate <= '2007-05-17 00:00:00'::timestamp\" \nis changed to \n\"idate >= '2007-05-17 00:00:00'::timestamp\" \nor\n\"idate = '2007-05-17 00:00:00'::timestamp\" \nthen the index is used:\n\nEXPLAIN SELECT id, osum\nFROM temp.tmp_135528\nWHERE idate >= '2007-05-17 00:00:00'::timestamp\nAND group_id = '13'\nAND osum <= '19654.45328'\nAND oinv = -1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_135528 on tmp_135528 (cost=0.00..462.61 rows=47 width=11)\n Index Cond: ((idate >= '2007-05-17 00:00:00'::timestamp without time zone) \nAND (group_id = 13) AND (osum <= 19654.45328) AND (oinv = (-1)))\n(2 rows)\n\nWhy I cannot use the index in <= comparison on timestamp ?\n\nBest regards,\nEvgeni Vasilev\nJAR Computers\nIT Department\njabber id: [email protected]\n\n\nI have the following table:\nCREATE TABLE \"temp\".tmp_135528\n(\n id integer NOT NULL,\n prid integer,\n group_id integer,\n iinv integer,\n oinv integer,\n isum numeric,\n osum numeric,\n idate timestamp without time zone,\n odate timestamp without time zone,\n CONSTRAINT t_135528_pk PRIMARY KEY (id)\n)\nWITH (OIDS=FALSE);\nWith index:\nCREATE INDEX t_135528\n ON \"temp\".tmp_135528\n USING btree\n (idate, group_id, osum, oinv);\nWhen the following query is executed the index is not used:\nEXPLAIN SELECT id, osum\nFROM temp.tmp_135528\nWHERE idate <= '2007-05-17 00:00:00'::timestamp\nAND group_id = '13'\nAND osum <= '19654.45328'\nAND oinv = -1\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tmp_135528 (cost=0.00..7022.36 rows=1166 width=11)\n Filter: ((idate <= '2007-05-17 00:00:00'::timestamp without time zone) AND (osum <= 19654.45328) AND (group_id = 13) AND (oinv = (-1)))\n(2 rows)\nWhen \n\"idate <= '2007-05-17 00:00:00'::timestamp\" \nis changed to \n\"idate >= '2007-05-17 00:00:00'::timestamp\" \nor\n\"idate = '2007-05-17 00:00:00'::timestamp\" \nthen the index is used:\nEXPLAIN SELECT id, osum\nFROM temp.tmp_135528\nWHERE idate >= '2007-05-17 00:00:00'::timestamp\nAND group_id = '13'\nAND osum <= '19654.45328'\nAND oinv = -1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_135528 on tmp_135528 
(cost=0.00..462.61 rows=47 width=11)\n Index Cond: ((idate >= '2007-05-17 00:00:00'::timestamp without time zone) AND (group_id = 13) AND (osum <= 19654.45328) AND (oinv = (-1)))\n(2 rows)\nWhy I cannot use the index in <= comparison on timestamp ?\nBest regards,\nEvgeni Vasilev\nJAR Computers\nIT Department\njabber id: [email protected]", "msg_date": "Tue, 12 May 2009 12:00:08 +0300", "msg_from": "=?utf-8?b?0JXQstCz0LXQvdC40Lkg0JLQsNGB0LjQu9C10LI=?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Timestamp index not used in some cases" }, { "msg_contents": "On Tue, May 12, 2009 at 3:00 AM, Евгений Василев\n<[email protected]> wrote:\n> I have the following table:\n>\n> CREATE TABLE \"temp\".tmp_135528\n> (\n> id integer NOT NULL,\n> prid integer,\n> group_id integer,\n> iinv integer,\n> oinv integer,\n> isum numeric,\n> osum numeric,\n> idate timestamp without time zone,\n> odate timestamp without time zone,\n> CONSTRAINT t_135528_pk PRIMARY KEY (id)\n> )\n> WITH (OIDS=FALSE);\n>\n> With index:\n>\n> CREATE INDEX t_135528\n> ON \"temp\".tmp_135528\n> USING btree\n> (idate, group_id, osum, oinv);\n>\n> When the following query is executed the index is not used:\n>\n> EXPLAIN SELECT id, osum\n> FROM temp.tmp_135528\n> WHERE idate <= '2007-05-17 00:00:00'::timestamp\n> AND group_id = '13'\n> AND osum <= '19654.45328'\n> AND oinv = -1\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on tmp_135528 (cost=0.00..7022.36 rows=1166 width=11)\n> Filter: ((idate <= '2007-05-17 00:00:00'::timestamp without time zone) AND\n> (osum <= 19654.45328) AND (group_id = 13) AND (oinv = (-1)))\n> (2 rows)\n>\n> When\n> \"idate <= '2007-05-17 00:00:00'::timestamp\"\n> is changed to\n> \"idate >= '2007-05-17 00:00:00'::timestamp\"\n> or\n> \"idate = '2007-05-17 00:00:00'::timestamp\"\n> then the index is used:\n>\n> EXPLAIN SELECT id, osum\n> FROM temp.tmp_135528\n> WHERE idate >= '2007-05-17 00:00:00'::timestamp\n> AND group_id = '13'\n> AND osum <= '19654.45328'\n> AND oinv = -1;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using t_135528 on tmp_135528 (cost=0.00..462.61 rows=47 width=11)\n> Index Cond: ((idate >= '2007-05-17 00:00:00'::timestamp without time zone)\n> AND (group_id = 13) AND (osum <= 19654.45328) AND (oinv = (-1)))\n> (2 rows)\n>\n> Why I cannot use the index in <= comparison on timestamp ?\n\nYou can. But in this instance one query is returning 47 rows while\nthe other is returning 1166 rows (or the planner thinks it is).\nThere's a switchover point where it's cheaper to seq scan. 
You can\nadjust this point up and down by adjusting various costs parameters.\nrandom_page_cost is commonly lowered to the 1.5 to 2.0 range, and\neffective_cache_size is normally set higher, to match the cache in the\nkernel plus the shared_buffer size.\n", "msg_date": "Tue, 12 May 2009 03:55:14 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Timestamp index not used in some cases" }, { "msg_contents": "On Tuesday 12 May 2009 12:55:14 Scott Marlowe wrote:\n> On Tue, May 12, 2009 at 3:00 AM, Евгений Василев\n>\n> <[email protected]> wrote:\n> > I have the following table:\n> >\n> > CREATE TABLE \"temp\".tmp_135528\n> > (\n> > id integer NOT NULL,\n> > prid integer,\n> > group_id integer,\n> > iinv integer,\n> > oinv integer,\n> > isum numeric,\n> > osum numeric,\n> > idate timestamp without time zone,\n> > odate timestamp without time zone,\n> > CONSTRAINT t_135528_pk PRIMARY KEY (id)\n> > )\n> > WITH (OIDS=FALSE);\n> >\n> > With index:\n> >\n> > CREATE INDEX t_135528\n> > ON \"temp\".tmp_135528\n> > USING btree\n> > (idate, group_id, osum, oinv);\n> >\n> > When the following query is executed the index is not used:\n> >\n> > EXPLAIN SELECT id, osum\n> > FROM temp.tmp_135528\n> > WHERE idate <= '2007-05-17 00:00:00'::timestamp\n> > AND group_id = '13'\n> > AND osum <= '19654.45328'\n> > AND oinv = -1\n> >\n> > QUERY PLAN\n> > -------------------------------------------------------------------------\n> >------------------------------------------------------------------ Seq\n> > Scan on tmp_135528 (cost=0.00..7022.36 rows=1166 width=11)\n> > Filter: ((idate <= '2007-05-17 00:00:00'::timestamp without time zone)\n> > AND (osum <= 19654.45328) AND (group_id = 13) AND (oinv = (-1)))\n> > (2 rows)\n> >\n> > When\n> > \"idate <= '2007-05-17 00:00:00'::timestamp\"\n> > is changed to\n> > \"idate >= '2007-05-17 00:00:00'::timestamp\"\n> > or\n> > \"idate = '2007-05-17 00:00:00'::timestamp\"\n> > then the index is used:\n> >\n> > EXPLAIN SELECT id, osum\n> > FROM temp.tmp_135528\n> > WHERE idate >= '2007-05-17 00:00:00'::timestamp\n> > AND group_id = '13'\n> > AND osum <= '19654.45328'\n> > AND oinv = -1;\n> > QUERY PLAN\n> > -------------------------------------------------------------------------\n> >----------------------------------------------------------------------\n> > Index Scan using t_135528 on tmp_135528 (cost=0.00..462.61 rows=47\n> > width=11) Index Cond: ((idate >= '2007-05-17 00:00:00'::timestamp without\n> > time zone) AND (group_id = 13) AND (osum <= 19654.45328) AND (oinv =\n> > (-1))) (2 rows)\n> >\n> > Why I cannot use the index in <= comparison on timestamp ?\n>\n> You can. But in this instance one query is returning 47 rows while\n> the other is returning 1166 rows (or the planner thinks it is).\n> There's a switchover point where it's cheaper to seq scan. 
You can\n> adjust this point up and down by adjusting various costs parameters.\n> random_page_cost is commonly lowered to the 1.5 to 2.0 range, and\n> effective_cache_size is normally set higher, to match the cache in the\n> kernel plus the shared_buffer size.\n\nThank you this worked like a charm.\n", "msg_date": "Wed, 13 May 2009 13:02:10 +0300", "msg_from": "=?koi8-r?b?5dfHxc7JyiD3wdPJzMXX?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Timestamp index not used in some cases" }, { "msg_contents": "Hi all,\nThe query below is fairly fast if the commented sub-select is\ncommented, but once I included that column, it takes over 10 minutes to\nreturn results. Can someone shed some light on it? I was able to redo\nthe query using left joins instead, and it only marginally increased\nresult time. This is an application (Quasar by Linux Canada) I can't\nchange the query in, so want to see if there's a way to tune the\ndatabase for it to perform faster. Application developer says that\nSybase is able to run this same query with the price column included\nwith only marginal increase in time.\n\n\nselect item.item_id,item_plu.number,item.description,\n(select dept.name from dept where dept.dept_id = item.dept_id)\n-- ,(select price from item_price\n-- where item_price.item_id = item.item_id\n-- and item_price.zone_id = 'OUsEaRcAA3jQrg42WHUm8A'\n-- and item_price.price_type = 0\n-- and item_price.size_name = item.sell_size)\nfrom item join item_plu on item.item_id = item_plu.item_id and\nitem_plu.seq_num = 0\nwhere item.inactive_on is null and exists (select item_num.number from\nitem_num\nwhere item_num.item_id = item.item_id)\nand exists (select stocked from item_store where stocked = 'Y'\nand item_store.item_id = item.item_id)\n\n\nExplain analyze without price column:\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=1563.82..13922.00 rows=10659 width=102) (actual\ntime=165.988..386.737 rows=10669 loops=1)\n Hash Cond: (item.item_id = item_store.item_id)\n -> Hash Join (cost=1164.70..2530.78 rows=10659 width=148) (actual\ntime=129.804..222.008 rows=10669 loops=1)\n Hash Cond: (item.item_id = item_plu.item_id)\n -> Hash Join (cost=626.65..1792.86 rows=10661 width=93)\n(actual time=92.930..149.267 rows=10669 loops=1)\n Hash Cond: (item.item_id = item_num.item_id)\n -> Seq Scan on item (cost=0.00..882.67 rows=10665\nwidth=70) (actual time=0.006..17.706 rows=10669 loops=1)\n Filter: (inactive_on IS NULL)\n -> Hash (cost=493.39..493.39 rows=10661 width=23)\n(actual time=92.872..92.872 rows=10672 loops=1)\n -> HashAggregate (cost=386.78..493.39 rows=10661\nwidth=23) (actual time=59.193..75.303 rows=10672 loops=1)\n -> Seq Scan on item_num (cost=0.00..339.22\nrows=19022 width=23) (actual time=0.007..26.013 rows=19040 loops=1)\n -> Hash (cost=404.76..404.76 rows=10663 width=55) (actual\ntime=36.835..36.835 rows=10672 loops=1)\n -> Seq Scan on item_plu (cost=0.00..404.76 rows=10663\nwidth=55) (actual time=0.010..18.609 rows=10672 loops=1)\n Filter: (seq_num = 0)\n -> Hash (cost=265.56..265.56 rows=10685 width=23) (actual\ntime=36.123..36.123 rows=10672 loops=1)\n -> Seq Scan on item_store (cost=0.00..265.56 rows=10685\nwidth=23) (actual time=0.015..17.959 rows=10672 loops=1)\n Filter: (stocked = 'Y'::bpchar)\n SubPlan 1\n -> Seq Scan on dept (cost=0.00..1.01 rows=1 width=32) (actual\ntime=0.002..0.004 rows=1 loops=10669)\n Filter: (dept_id = $0)\n Total 
runtime: 401.560 ms\n(21 rows)\n\n\nExplain with price column:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=1563.82..4525876.70 rows=10659 width=106) (actual\ntime=171.186..20863.887 rows=10669 loops=1)\n Hash Cond: (item.item_id = item_store.item_id)\n -> Hash Join (cost=1164.70..2530.78 rows=10659 width=152) (actual\ntime=130.025..236.528 rows=10669 loops=1)\n Hash Cond: (item.item_id = item_plu.item_id)\n -> Hash Join (cost=626.65..1792.86 rows=10661 width=97)\n(actual time=92.780..158.514 rows=10669 loops=1)\n Hash Cond: (item.item_id = item_num.item_id)\n -> Seq Scan on item (cost=0.00..882.67 rows=10665\nwidth=74) (actual time=0.008..18.836 rows=10669 loops=1)\n Filter: (inactive_on IS NULL)\n -> Hash (cost=493.39..493.39 rows=10661 width=23)\n(actual time=92.727..92.727 rows=10672 loops=1)\n -> HashAggregate (cost=386.78..493.39 rows=10661\nwidth=23) (actual time=59.064..75.243 rows=10672 loops=1)\n -> Seq Scan on item_num (cost=0.00..339.22\nrows=19022 width=23) (actual time=0.009..26.287 rows=19040 loops=1)\n -> Hash (cost=404.76..404.76 rows=10663 width=55) (actual\ntime=37.206..37.206 rows=10672 loops=1)\n -> Seq Scan on item_plu (cost=0.00..404.76 rows=10663\nwidth=55) (actual time=0.011..18.823 rows=10672 loops=1)\n Filter: (seq_num = 0)\n -> Hash (cost=265.56..265.56 rows=10685 width=23) (actual\ntime=36.395..36.395 rows=10672 loops=1)\n -> Seq Scan on item_store (cost=0.00..265.56 rows=10685\nwidth=23) (actual time=0.015..18.120 rows=10672 loops=1)\n Filter: (stocked = 'Y'::bpchar)\n SubPlan 1\n -> Seq Scan on dept (cost=0.00..1.01 rows=1 width=32) (actual\ntime=0.002..0.004 rows=1 loops=10669)\n Filter: (dept_id = $0)\n SubPlan 2\n -> Seq Scan on item_price (cost=0.00..423.30 rows=1 width=8)\n(actual time=1.914..1.914 rows=0 loops=10669)\n Filter: ((item_id = $1) AND (zone_id =\n'OUsEaRcAA3jQrg42WHUm8A'::bpchar) AND (price_type = 0) AND\n((size_name)::text = ($2)::text))\n Total runtime: 20879.388 ms\n(24 rows)\n\n\n", "msg_date": "Sat, 21 Nov 2009 22:41:23 -0600", "msg_from": "Mark Dueck <[email protected]>", "msg_from_op": false, "msg_subject": "sub-select makes query take too long - unusable" } ]
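As a follow-up to the timestamp-index discussion above, a minimal sketch of the cost-parameter change Scott Marlowe suggests and the original poster confirmed worked. The concrete numbers are only the commonly quoted starting points from his reply, not values measured on that machine; effective_cache_size should be sized to the OS cache plus shared_buffers of the actual server.

-- Try per-session first, persist in postgresql.conf once the plans look right:
SET random_page_cost = 2.0;        -- default is 4.0; lower makes index scans cheaper
SET effective_cache_size = '6GB';  -- illustrative; on releases without memory-unit
                                   -- syntax give it as a number of 8 kB pages
EXPLAIN ANALYZE
SELECT id, osum
FROM temp.tmp_135528
WHERE idate <= '2007-05-17 00:00:00'::timestamp
AND group_id = '13'
AND osum <= '19654.45328'
AND oinv = -1;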
[ { "msg_contents": "Anyone on the list had a chance to benchmark the Nehalem's yet? I'm\nprimarily wondering if their promise of performance from 3 memory\nchannels holds up under typical pgsql workloads. I've been really\nhappy with the behavior of my AMD shanghai based server under heavy\nloads, but if the Nehalems much touted performance increase translates\nto pgsql, I'd like to know.\n", "msg_date": "Tue, 12 May 2009 12:47:21 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "Anand did SQL Server and Oracle test results, the Nehalem system looks \nlike a substantial improvement over the Shanghai Opteron 2384:\n\nhttp://it.anandtech.com/IT/showdoc.aspx?i=3536&p=6\nhttp://it.anandtech.com/IT/showdoc.aspx?i=3536&p=7\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 12 May 2009 22:05:58 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "On Tue, May 12, 2009 at 8:05 PM, Greg Smith <[email protected]> wrote:\n> Anand did SQL Server and Oracle test results, the Nehalem system looks like\n> a substantial improvement over the Shanghai Opteron 2384:\n>\n> http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=6\n> http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=7\n\nThat's an interesting article. Thanks for the link. A couple points\nstick out to me.\n\n1: 5520 to 5540 parts only have 1 133MHz step increase in performance\n2: 550x parts have no hyperthreading.\n\nAssuming that the parts tested (5570) were using hyperthreading and\ntwo 133MHz steps, at the lower end of the range, the 550x parts are\nlikely not that much faster than the opterons in their same clock\nspeed range, but are still quite a bit more expensive.\n\nIt'd be nice to see some benchmarks on the more reasonably priced CPUs\nin both ranges, the 2.2 to 2.4 GHz opterons and the 2.0 (5504) to\n2.26GHz (5520) nehalems. Since I have to buy > 1 server to handle the\nload and provide redundancy anyway, single cpu performance isn't\nnearly as interesting as aggregate performance / $ spent.\n\nWhile all the benchmarks on near 3GHz parts is fun to read and\nsalivate over, it's not as relevant to my interests as the performance\nof the more reasonably prices parts.\n", "msg_date": "Tue, 12 May 2009 20:28:39 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "The $ cost of more CPU power on larger machines ends up such a small %\nchunk, especially after I/O cost. Sure, the CPU with HyperThreading and the\nturbo might be 40% more expensive than the other CPU, but if the total\nsystem cost is 5% more for 15% more performance . . .\n\nIt depends on how CPU limited you are. If you aren't, there isn't much of a\nreason to look past the cheaper Opterons with a good I/O setup.\n\nI've got a 2 x 5520 system with lots of RAM on the way. The problem with\nlots of RAM in the Nehalem systems, is that the memory speed slows as more\nis added. I think mine slows from the 1066Mhz the processor can handle to\n800Mhz. 
It still has way more bandwidth than the old Xeons though.\nAlthough my use case is about as far from pg_bench as you can get, I might\nbe able to get a run of it in during stress testing.\n\n\n\nOn 5/12/09 7:28 PM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> On Tue, May 12, 2009 at 8:05 PM, Greg Smith <[email protected]> wrote:\n>> Anand did SQL Server and Oracle test results, the Nehalem system looks like\n>> a substantial improvement over the Shanghai Opteron 2384:\n>> \n>> http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=6\n>> http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=7\n> \n> That's an interesting article. Thanks for the link. A couple points\n> stick out to me.\n> \n> 1: 5520 to 5540 parts only have 1 133MHz step increase in performance\n> 2: 550x parts have no hyperthreading.\n> \n> Assuming that the parts tested (5570) were using hyperthreading and\n> two 133MHz steps, at the lower end of the range, the 550x parts are\n> likely not that much faster than the opterons in their same clock\n> speed range, but are still quite a bit more expensive.\n> \n> It'd be nice to see some benchmarks on the more reasonably priced CPUs\n> in both ranges, the 2.2 to 2.4 GHz opterons and the 2.0 (5504) to\n> 2.26GHz (5520) nehalems. Since I have to buy > 1 server to handle the\n> load and provide redundancy anyway, single cpu performance isn't\n> nearly as interesting as aggregate performance / $ spent.\n> \n> While all the benchmarks on near 3GHz parts is fun to read and\n> salivate over, it's not as relevant to my interests as the performance\n> of the more reasonably prices parts.\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 12 May 2009 19:59:28 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "On Tue, May 12, 2009 at 8:59 PM, Scott Carey <[email protected]> wrote:\n> The $ cost of more CPU power on larger machines ends up such a small %\n> chunk, especially after I/O cost.  Sure, the CPU with HyperThreading and the\n> turbo might be 40% more expensive than the other CPU, but if the total\n> system cost is 5% more for 15% more performance . . .\n\nBut everything dollar I spend on CPUs is a dollar I can't spend on\nRAID contollers, more memory, or more drives.\n\nWe're looking at machines with say 32 1TB SATA drives, which run in\nthe $12k range. The Nehalem 5570s (2.8GHz) are going for something in\nthe range of $1500 or more, the 5540 (2.53GHz) at $774.99, 5520\n(2.26GHz) at $384.99, and the 5506 (2.13GHz) at $274.99. The 5520 is\nthe first one with hyperthreading so it's a reasonable cost increase.\nSomewhere around the 5530 the cost for increase in performance stops\nmaking a lot of sense.\n\nThe opterons, like the 2378 barcelona at 2.4GHz cost $279.99, or the\n2.5GHz 2380 at $400 are good values. And I know they mostly scale by\nclock speed so I can decide on which to buy based on that. The 83xx\nseries cpus are still far too expensive to be cost effective, with\n2.2GHz parts running $600 and faster parts climbing VERY quickly after\nthat.\n\nSo what I want to know is how the 2.5GHz barcelonas would compare to\nboth the 5506 through 5530 nehalems, as those parts are all in the\nsame cost range (sub $500 cpus).\n\n> It depends on how CPU limited you are.  
If you aren't, there isn't much of a\n> reason to look past the cheaper Opterons with a good I/O setup.\n\nExactly. Which is why I'm looking for best bang for buck on the CPU\nfront. Also performance as a \"data pump\" so to speak, i.e. minimizing\nmemory bandwidth limitations.\n\n> I've got a 2 x 5520 system with lots of RAM on the way.  The problem with\n> lots of RAM in the Nehalem systems, is that the memory speed slows as more\n> is added.\n\nI too wondered about that and its effect on performance. Another\nbenchmark I'd like to see, how it runs with more and less memory.\n\n> I think mine slows from the 1066Mhz the processor can handle to\n> 800Mhz.  It still has way more bandwidth than the old Xeons though.\n> Although my use case is about as far from pg_bench as you can get, I might\n> be able to get a run of it in during stress testing.\n\nI'd be very interested in hearing how it runs. and not just for pgbench.\n", "msg_date": "Tue, 12 May 2009 21:36:45 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "Just realized I made a mistake, I was under the impression that\nShanghai CPUs had 8xxx numbers while barcelona had 23xx numbers. I\nwas wrong, it appears the 8xxx numbers are for 4+ socket servers while\nthe 23xx numbers are for 2 or fewer sockets. So, there are several\nquite affordable shanghai cpus out there, and many of the ones I\nquoted as barcelonas are in fact shanghais with the larger 6M L2\ncache.\n", "msg_date": "Tue, 12 May 2009 23:06:21 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "We have a dual E5540 with 16GB (I think 1066Mhz) memory here, but no AMD \nShanghai. We haven't done PostgreSQL benchmarks yet, but given the \nprevious experiences, PostgreSQL should be equally faster compared to mysql.\n\nOur databasebenchmark is actually mostly a cpu/memory-benchmark. \nComparing the results of the dual E5540 (2.53Ghz with HT enabled) to a \ndual Intel X5355 (2.6Ghz quad core two from 2007), the peek load has \nincreased from somewhere between 7 and 10 concurrent clients to \nsomewhere around 25, suggesting better scalable hardware. With the 25 \nconcurrent clients we handled 2.5 times the amount of queries/second \ncompared to the 7 concurrent client-score for the X5355, both in MySQL \n5.0.41. At 7 CC we still had 1.7 times the previous result.\n\nI'm not really sure how the shanghai cpu's compare to those older \nX5355's, the AMD's should be faster, but how much?\n\nI've no idea if we get a Shanghai to compare it with, but we will get a \ndual X5570 soon on which we'll repeat some of the tests, so that should \nat least help a bit with scaling the X5570-results around the world down.\n\nBest regards,\n\nArjen\n\nOn 12-5-2009 20:47 Scott Marlowe wrote:\n> Anyone on the list had a chance to benchmark the Nehalem's yet? I'm\n> primarily wondering if their promise of performance from 3 memory\n> channels holds up under typical pgsql workloads. 
I've been really\n> happy with the behavior of my AMD shanghai based server under heavy\n> loads, but if the Nehalems much touted performance increase translates\n> to pgsql, I'd like to know.\n> \n", "msg_date": "Wed, 13 May 2009 08:08:24 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "\n\nOn 5/12/09 10:06 PM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> Just realized I made a mistake, I was under the impression that\n> Shanghai CPUs had 8xxx numbers while barcelona had 23xx numbers. I\n> was wrong, it appears the 8xxx numbers are for 4+ socket servers while\n> the 23xx numbers are for 2 or fewer sockets. So, there are several\n> quite affordable shanghai cpus out there, and many of the ones I\n> quoted as barcelonas are in fact shanghais with the larger 6M L2\n> cache.\n> \n\nAt this point, I wouldn¹t go below 5520 on the Nehalem side (turbo + HT is\njust too big a jump, as is the 1066Mhz versus 800Mhz memory jump). Its $100\nextra per CPU on a $10K + machine.\nThe next 'step' is the 5550, since it can run 1333Mhz memory and has 2x the\nturbo -- but you would have to be more CPU bound for that. I wouldn't worry\nabout the 5530 or 5540, they will only scale a little up from the 5520.\n\nFor Opterons, I wouldn't touch anything but a Shanghai these days since its\njust not much more and we know the cache differences are very important for\nDB loads.\n\n", "msg_date": "Wed, 13 May 2009 09:58:41 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "\nOn 5/12/09 11:08 PM, \"Arjen van der Meijden\" <[email protected]>\nwrote:\n\n> We have a dual E5540 with 16GB (I think 1066Mhz) memory here, but no AMD\n> Shanghai. We haven't done PostgreSQL benchmarks yet, but given the\n> previous experiences, PostgreSQL should be equally faster compared to mysql.\n> \n> Our databasebenchmark is actually mostly a cpu/memory-benchmark.\n> Comparing the results of the dual E5540 (2.53Ghz with HT enabled) to a\n> dual Intel X5355 (2.6Ghz quad core two from 2007), the peek load has\n> increased from somewhere between 7 and 10 concurrent clients to\n> somewhere around 25, suggesting better scalable hardware. With the 25\n> concurrent clients we handled 2.5 times the amount of queries/second\n> compared to the 7 concurrent client-score for the X5355, both in MySQL\n> 5.0.41. At 7 CC we still had 1.7 times the previous result.\n> \n\nExcellent! That is a pretty huge boost. I'm curious which aspects of this\nnew architecture helped the most. For Postgres, the following would seem\nthe most relevant:\n1. Shared L3 cache per processors -- more efficient shared datastructure\naccess.\n2. Faster atomic operations -- CompareAndSwap, etc are much faster.\n3. Faster cache coherency.\n4. Lower latency RAM with more overall bandwidth (Opteron style).\n\nCan you do a quick and dirty memory bandwidth test? 
(assuming linux)\nOn the older X5355 machine and the newer E5540, try:\n/sbin/hdparm -T /dev/sd<device>\n\nWhere <device> is a valid letter for a device on your system.\n\nHere are the results for me on an older system with dual Intel E5335 (2Ghz,\n4MB cache, family 6 model 15)\nBest result out of 5 (its not all that consistent, + or minus 10%)\n/dev/sda:\n Timing cached reads: 10816 MB in 2.00 seconds = 5416.89 MB/sec\n\nAnd a newer system with dual Xeon X5460 (3.16Ghz, 6MB cache, family 6 model\n23)\nBest of 7 results:\n/dev/sdb:\n Timing cached reads: 26252 MB in 1.99 seconds = 13174.42 MB/sec\n\nIts not a very accurate measurement, but its quick and highlights relative\nhardware differences very easily.\n\n\n> I'm not really sure how the shanghai cpu's compare to those older\n> X5355's, the AMD's should be faster, but how much?\n> \n\nI'm not sure either, and the Xeon platforms have evolved such that the\nchipsets and RAM configurations matter as much as the processor does.\n\n> I've no idea if we get a Shanghai to compare it with, but we will get a\n> dual X5570 soon on which we'll repeat some of the tests, so that should\n> at least help a bit with scaling the X5570-results around the world down.\n> \n> Best regards,\n> \n> Arjen\n> \n\n", "msg_date": "Wed, 13 May 2009 11:39:29 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "FYI:\nThis is an excellent article on the Nehalem CPU's and their memory\nperformance as the CPU and RAM combinations change:\n\nhttp://blogs.sun.com/jnerl/entry/configuring_and_optimizing_intel_xeon\n\nIts fairly complicated (as it is for the Opteron too).\n\n\nOn 5/13/09 9:58 AM, \"Scott Carey\" <[email protected]> wrote:\n\n> \n> \n> \n> On 5/12/09 10:06 PM, \"Scott Marlowe\" <[email protected]> wrote:\n> \n>> Just realized I made a mistake, I was under the impression that\n>> Shanghai CPUs had 8xxx numbers while barcelona had 23xx numbers. I\n>> was wrong, it appears the 8xxx numbers are for 4+ socket servers while\n>> the 23xx numbers are for 2 or fewer sockets. So, there are several\n>> quite affordable shanghai cpus out there, and many of the ones I\n>> quoted as barcelonas are in fact shanghais with the larger 6M L2\n>> cache.\n>> \n> \n> At this point, I wouldn¹t go below 5520 on the Nehalem side (turbo + HT is\n> just too big a jump, as is the 1066Mhz versus 800Mhz memory jump). Its $100\n> extra per CPU on a $10K + machine.\n> The next 'step' is the 5550, since it can run 1333Mhz memory and has 2x the\n> turbo -- but you would have to be more CPU bound for that. I wouldn't worry\n> about the 5530 or 5540, they will only scale a little up from the 5520.\n> \n> For Opterons, I wouldn't touch anything but a Shanghai these days since its\n> just not much more and we know the cache differences are very important for\n> DB loads.\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 13 May 2009 11:59:00 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "On 13-5-2009 20:39 Scott Carey wrote:\n> Excellent! That is a pretty huge boost. I'm curious which aspects of this\n> new architecture helped the most. For Postgres, the following would seem\n> the most relevant:\n> 1. 
Shared L3 cache per processors -- more efficient shared datastructure\n> access.\n> 2. Faster atomic operations -- CompareAndSwap, etc are much faster.\n> 3. Faster cache coherency.\n> 4. Lower latency RAM with more overall bandwidth (Opteron style).\n\nApart from that, it has a newer debian (and thus kernel/glibc) and a \nslightly less constraining IO which may help as well.\n\n> Can you do a quick and dirty memory bandwidth test? (assuming linux)\n> On the older X5355 machine and the newer E5540, try:\n> /sbin/hdparm -T /dev/sd<device>\n\nIt is in use, so the results may not be so good, this is the best I got \non our dual X5355:\n Timing cached reads: 6314 MB in 2.00 seconds = 3159.08 MB/sec\n\nBut this is the best I got for a (also in use) Dual E5450 we have:\n Timing cached reads: 13158 MB in 2.00 seconds = 6587.11 MB/sec\n\nAnd here the best for the (idle) E5540:\n Timing cached reads: 16494 MB in 2.00 seconds = 8256.27 MB/sec\n\nThese numbers are with hdparm v8.9\n\nBest regards,\n\nArjen\n", "msg_date": "Thu, 14 May 2009 08:21:38 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "On Wed, 13 May 2009, Scott Carey wrote:\n\n> Can you do a quick and dirty memory bandwidth test? (assuming linux)\n>\n> /sbin/hdparm -T /dev/sd<device>\n>\n> ...its not a very accurate measurement, but its quick and highlights \n> relative hardware differences very easily.\n\nI've found \"hdparm -T\" to be useful for comparing the relative memory \nbandwidth of a given system as I change its RAM configuration around, but \nthat's about it. I've seen that result change by a factor of 2X just by \nchanging kernel version on the same hardware. The data volume transferred \ndoesn't seem to be nearly enough to extract the true RAM speed from \n(guessing the cause here) things like whether the test/kernel code fits \ninto the CPU cache.\n\nI'm using this nowadays:\n\nsysbench --test=memory --memory-oper=write --memory-block-size=1024MB \n--memory-total-size=1024MB run\n\nThe sysbench read test looks similarly borked by caching effects when I've \ntried it, but if you write that much it seems to give useful results.\n\nP.S. Too many Scotts who write similarly on this thread. If either if you \nare at PGCon next week, please flag me down if you see me so I can finally \nsort you two out.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 14 May 2009 02:52:03 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "\nOn 5/13/09 11:52 PM, \"Greg Smith\" <[email protected]> wrote:\n\n> On Wed, 13 May 2009, Scott Carey wrote:\n> \n>> Can you do a quick and dirty memory bandwidth test? (assuming linux)\n>> \n>> /sbin/hdparm -T /dev/sd<device>\n>> \n>> ...its not a very accurate measurement, but its quick and highlights\n>> relative hardware differences very easily.\n> \n> I've found \"hdparm -T\" to be useful for comparing the relative memory\n> bandwidth of a given system as I change its RAM configuration around, but\n> that's about it. I've seen that result change by a factor of 2X just by\n> changing kernel version on the same hardware. 
The data volume transferred\n> doesn't seem to be nearly enough to extract the true RAM speed from\n> (guessing the cause here) things like whether the test/kernel code fits\n> into the CPU cache.\n\nThat's too bad -- I have been using it to compare machines as well, but they\nare all on the same Linux version / distro.\n\nRegardless -- the results indicate a 2x to 3x bandwidth improvement... Which\nsounds about right if the older CPU isn't on the newer FBDIMM chipset. If\nboth of those machines are on the same Kernel, the relative values should be\na somewhat valid (though -- definitely not all that accurate).\n\n> \n> I'm using this nowadays:\n> \n> sysbench --test=memory --memory-oper=write --memory-block-size=1024MB\n> --memory-total-size=1024MB run\n> \n\nUnfortunately, sysbench isn't installed by default on many (most?) distros,\nor even available as a package on many. So its a bigger 'ask' to get\nresults from it. Certainly a significantly better overall tool.\n\n> The sysbench read test looks similarly borked by caching effects when I've\n> tried it, but if you write that much it seems to give useful results.\n\n", "msg_date": "Thu, 14 May 2009 10:01:02 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" }, { "msg_contents": "\nOn 5/13/09 11:21 PM, \"Arjen van der Meijden\" <[email protected]>\nwrote:\n\n> On 13-5-2009 20:39 Scott Carey wrote:\n>> Excellent! That is a pretty huge boost. I'm curious which aspects of this\n>> new architecture helped the most. For Postgres, the following would seem\n>> the most relevant:\n>> 1. Shared L3 cache per processors -- more efficient shared datastructure\n>> access.\n>> 2. Faster atomic operations -- CompareAndSwap, etc are much faster.\n>> 3. Faster cache coherency.\n>> 4. Lower latency RAM with more overall bandwidth (Opteron style).\n> \n> Apart from that, it has a newer debian (and thus kernel/glibc) and a\n> slightly less constraining IO which may help as well.\n> \n>> Can you do a quick and dirty memory bandwidth test? (assuming linux)\n>> On the older X5355 machine and the newer E5540, try:\n>> /sbin/hdparm -T /dev/sd<device>\n> \n> It is in use, so the results may not be so good, this is the best I got\n> on our dual X5355:\n> Timing cached reads: 6314 MB in 2.00 seconds = 3159.08 MB/sec\n> \n> But this is the best I got for a (also in use) Dual E5450 we have:\n> Timing cached reads: 13158 MB in 2.00 seconds = 6587.11 MB/sec\n> \n> And here the best for the (idle) E5540:\n> Timing cached reads: 16494 MB in 2.00 seconds = 8256.27 MB/sec\n> \n> These numbers are with hdparm v8.9\n\nThanks!\n\nMy numbers were with hdparm 6.6 (Centos 5.3) -- so they aren't directly\ncomparable. \nFYI When my systems are in use, the results are typically 50% to 75% of the\nidle scores.\n\nBut, yours probably are roughly comparable to each other -- you're getting\nmore than 2x the memory bandwidth between those systems. Without knowing\nthe exact chipset and RAM configurations, this is definitely a factor in the\nperformance difference at higher concurrency.\n\n\n> \n> Best regards,\n> \n> Arjen\n> \n\n", "msg_date": "Thu, 14 May 2009 10:10:06 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD Shanghai versus Intel Nehalem" } ]
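The hdparm and sysbench figures above measure raw memory bandwidth from the OS side. As a purely illustrative complement -- an assumption of this note, not something anyone in the thread ran -- a crude single-backend smoke test can be timed from psql on each box; it exercises one core and the memory it touches, and says nothing about the multi-client scaling or memory-channel effects the thread is really about.

\timing
SET work_mem = '512MB';  -- keep the generated set in RAM instead of temp files
                         -- (memory-unit syntax needs 8.2 or later)
SELECT sum(x) FROM generate_series(1, 10000000) AS x;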
[ { "msg_contents": "Hi\n\nhave the following table (theoretical)\n\ntable apartment_location (\n\n\tcity_id\t int,\n\tstreet_id int,\n\thouse_id int,\n\tfloor_id int,\n\towner\t string\n\t...\n)\n\nindex .. ( city_id, street_id, house_id, floor_id ) tablespc indexspace;\n\non a database with 260 GB of data and an index size of 109GB on separate \nraid disks. there are\n\t85 city_ids, 2000\n\tstreet_ids per city,\n\t20 house_ids per street per city\n\t5 floor_ids per house_ per street per city\n\nThen I perform a query to retrieve all house_ids for a specified city, \nhouse and floor ( a bit contrived, but the same cardinality applies)\n\n select street_id, floor_id from apartment_location where\n\tcity_id = 67 and\n\thouse_id = 6 and\n\tfloor_id = 4\n\nthis returns about 2000 rows, but the query takes 3-4 seconds. It \nperformas an index scan, and everything happens inside 6GB of memory.\n\nSo the question, any suggestions on how to possibly decrease the query \ntime. From iostat etc. its seems that most of the work is reading the \nindex, reading the data takes almost next to nothing.\n\nAny suggestions?\n\nregards\n\nthomas\n\n\n\n\n\n\t\n", "msg_date": "Tue, 12 May 2009 23:52:23 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "increase index performance" }, { "msg_contents": "On Tue, 12 May 2009, Thomas Finneid wrote:\n\n> on a database with 260 GB of data and an index size of 109GB on separate raid \n> disks. there are\n> \t85 city_ids, 2000\n> \tstreet_ids per city,\n> \t20 house_ids per street per city\n> \t5 floor_ids per house_ per street per city\n\nYou should test what happens if you reduce the index to just being \n(city_id,street_id). Having all the fields in there makes the index \nlarger, and it may end up being faster to just pull all of the ~100 data \nrows for a particular (city_id,street_id) using the smaller index and then \nfilter out just the ones you need. Having a smaller index to traverse \nalso means that you'll be more likely to keep all the index blocks in the \nbuffer cache moving forward.\n\nA second level improvement there is to then CLUSTER on the smaller index, \nwhich increases the odds you'll get all of the rows you need by fetching \nonly a small number of data pages.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 12 May 2009 21:38:19 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increase index performance" }, { "msg_contents": "\nFirst off, is there a way to pre-filter some of this data, by a view, \ntemporary table, partitioned indexes or something.\n\nSecondly, one of the problems seems to be the size of the data and its \nindex, how can I calulate how much space a particular part of the index \nneeds in memory? maybe I could rearrange things a bit better so it \nbetter first inside pages and so on.\n\nThirdly I was a bit unclear and this was the best example I could think \nof (my client probably dont want me to talk about this at all... 
hence \nthe contrived example):\n\n 85 city_ids,\n 2000 street_ids per city,\n 10 house_ids per street\n 500 floor_ids per house\n\nNow it should have the correct data distribution and the correct \ncardinality.\n\nIn this particular query I am interested in all streets in a city that \nhave the specific house id and the specific floor id.\n\nBy specifying\n\tcity_id, house_id and floor_id\n\nI should get all street_ids that matches\n\nThe example you gave Greg assumed I wanted to follow cardinality, but I \nneed to skip the second field in order to get the right query. So \npulling the data based on the two first fields, City and Street would \njust give me data for a single street, when I want it for all streets.\n\n\n\n\t\n\n\n\n\n\nGreg Smith wrote:\n> On Tue, 12 May 2009, Thomas Finneid wrote:\n> \n>> on a database with 260 GB of data and an index size of 109GB on \n>> separate raid disks. there are\n>> 85 city_ids, 2000\n>> street_ids per city,\n>> 20 house_ids per street per city\n>> 5 floor_ids per house_ per street per city\n> \n> You should test what happens if you reduce the index to just being \n> (city_id,street_id). Having all the fields in there makes the index \n> larger, and it may end up being faster to just pull all of the ~100 data \n> rows for a particular (city_id,street_id) using the smaller index and \n> then filter out just the ones you need. Having a smaller index to \n> traverse also means that you'll be more likely to keep all the index \n> blocks in the buffer cache moving forward.\n> \n> A second level improvement there is to then CLUSTER on the smaller \n> index, which increases the odds you'll get all of the rows you need by \n> fetching only a small number of data pages.\n> \n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n", "msg_date": "Wed, 13 May 2009 09:25:18 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: increase index performance" }, { "msg_contents": "On Tue, 12 May 2009, Greg Smith wrote:\n> You should test what happens if you reduce the index to just being \n> (city_id,street_id).\n\nI think you're missing the point a little here. The point is that Thomas \nis creating an index on (city_id, street_id, house_id, floor_id) and \nrunning a query on (city_id, house_id, floor_id).\n\nThomas, the order of columns in the index matters. The index is basically \na tree structure, which resolves the left-most column before resolving the \ncolumn to the right of it. So to answer your query, it will resolve the \ncity_id, then it will have to scan almost all of the tree under that, \nbecause you are not constraining for street_id. A much better index to \nanswer your query is (city_id, house_id, floor_id) - then it can just look \nup straight away. Instead of the index returning 200000 rows to check, it \nwill return just the 2000.\n\nMatthew\n\n-- \n An ant doesn't have a lot of processing power available to it. I'm not trying\n to be speciesist - I wouldn't want to detract you from such a wonderful\n creature, but, well, there isn't a lot there, is there?\n -- Computer Science Lecturer\n", "msg_date": "Wed, 13 May 2009 11:53:27 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increase index performance" }, { "msg_contents": "Matthew Wakeling wrote:\n> Thomas, the order of columns in the index matters. 
The index is \n> basically a tree structure, which resolves the left-most column before \n> resolving the column to the right of it. So to answer your query, it \n> will resolve the city_id, then it will have to scan almost all of the \n> tree under that, because you are not constraining for street_id. A much \n> better index to answer your query is (city_id, house_id, floor_id) - \n> then it can just look up straight away. Instead of the index returning \n> 200000 rows to check, it will return just the 2000.\n\nThats something I was a bit unsure about, because of the cardinality of \nthe data. But thanks, I will try it. Just need to populate a new data \nbase with the new index. (Apparently, creating a new index on an already \nexisting database is slower than just recreating the db, when the db is \n250GB big)\n\n\nthomas\n", "msg_date": "Wed, 13 May 2009 21:45:28 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "Re: increase index performance" }, { "msg_contents": "\n\n-----Original Message-----\nFrom: [email protected] [mailto:pgsql-performance- A\nmuch \n>> better index to answer your query is (city_id, house_id, floor_id) - \n>> then it can just look up straight away. Instead of the index returning \n>> 200000 rows to check, it will return just the 2000.\n\nShouldn't BITMAP indexes come into play?\n\nDoes having one index w/ 3 parameters being better than 3 index w/ 3\ndifferent parameters be better for BITMAP index seeks?\n\n\n", "msg_date": "Thu, 14 May 2009 14:27:09 +0800", "msg_from": "\"Ow Mun Heng\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increase index performance" }, { "msg_contents": "On Thu, 14 May 2009, Ow Mun Heng wrote:\n> Shouldn't BITMAP indexes come into play?\n>\n> Does having one index w/ 3 parameters being better than 3 index w/ 3\n> different parameters be better for BITMAP index seeks?\n\nI'll let someone correct me if I'm wrong, but using a single index that \nexactly covers your search is always going to be better than munging \ntogether results from several indexes, even if the planner decides to turn \nit into a bitmap index scan (which will be more likely in PG8.4 with \neffective_concurrency set).\n\nMatthew\n\n-- \n I don't want the truth. I want something I can tell parliament!\n -- Rt. Hon. Jim Hacker MP\n", "msg_date": "Thu, 14 May 2009 11:20:33 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increase index performance" }, { "msg_contents": "\n\n-----Original Message-----\nFrom: Matthew Wakeling [mailto:[email protected]] \nOn Thu, 14 May 2009, Ow Mun Heng wrote:\n>> Shouldn't BITMAP indexes come into play?\n>>\n>> Does having one index w/ 3 parameters being better than 3 index w/ 3\n>> different parameters be better for BITMAP index seeks?\n\n>I'll let someone correct me if I'm wrong, but using a single index that \n>exactly covers your search is always going to be better than munging \n>together results from several indexes, even if the planner decides to turn \n>it into a bitmap index scan (which will be more likely in PG8.4 with \n>effective_concurrency set).\n\nI don't dis-agree there, but for multiple field indexes, the sequence has to\nbe followed. If queries hit those exact sequence, then we're good to go, if\nnot, then it's wasted and not used to it's full capability.\n\nEffective_concurrency you say.. Hmm... 
gotta google that\n\n", "msg_date": "Thu, 14 May 2009 19:54:20 +0800", "msg_from": "\"Ow Mun Heng\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increase index performance" } ]
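For reference, a sketch of the index change discussed in the thread above. The table name and index names are invented here (the thread never shows its DDL); only the column list and the shape of the query come from the discussion.

    -- Assumed table name "readings"; original index leads on city_id but the
    -- street_id column blocks use of the remaining columns for this query.
    CREATE INDEX readings_city_street_house_floor_idx
        ON readings (city_id, street_id, house_id, floor_id);

    -- Index matching the query's actual predicates, as suggested above.
    CREATE INDEX readings_city_house_floor_idx
        ON readings (city_id, house_id, floor_id);

    -- The query from the discussion: constrain city, house and floor,
    -- return the matching street_ids.
    SELECT street_id
    FROM   readings
    WHERE  city_id  = 42
    AND    house_id = 7
    AND    floor_id = 3;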
[ { "msg_contents": "Dear List mates,\n\nmore optimal plan... \nmorreoptimal configuration...\n\nwe suffer a 'more optimal' superlative missuse\n\nthere is not so 'more optimal' thing but a simple 'better' thing.\n\nim not native english speaker but i think it still applies.\n\nWell this a superlative list so all of you deserve a better \"optimal\" use.\n\nRegards, Angel\n-- \nEste correo no tiene dibujos. Las formas extrañas en la pantalla son letras.\n->>-----------------------------------------------\n Clist UAH a.k.a Angel\n---------------------------------[www.uah.es]-<<--\n\n...being the second biggest search engine in the world is good enough for us. Peter @ Pirate Bay.\n", "msg_date": "Tue, 12 May 2009 23:53:27 +0200", "msg_from": "Angel Alvarez <[email protected]>", "msg_from_op": true, "msg_subject": "superlative missuse" }, { "msg_contents": "On Tue, May 12, 2009 at 5:53 PM, Angel Alvarez <[email protected]> wrote:\n\n> we suffer a 'more optimal' superlative missuse\n>\n> there is  not so 'more optimal' thing but a simple 'better' thing.\n>\n> im not native english speaker but i think it still applies.\n>\n> Well this a superlative list so all of you deserve a better \"optimal\" use.\n\nAs a native english speaker:\n\nYou are technically correct. However, \"more optimal\" has a\nwell-understood meaning as \"closer to optimal\", and as such is\nappropriate and generally acceptable despite being technically\nincorrect.\n\nThis is a postgres mailing list, not an english grammar mailing list...\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Tue, 12 May 2009 20:12:41 -0400", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: superlative missuse" }, { "msg_contents": "[email protected] (Angel Alvarez) writes:\n> more optimal plan... \n> morreoptimal configuration...\n>\n> we suffer a 'more optimal' superlative missuse\n>\n> there is not so 'more optimal' thing but a simple 'better' thing.\n>\n> im not native english speaker but i think it still applies.\n\nIf I wanted to be pedantic about it, I'd say that the word \"nearly\" is\nmissing.\n\nThat is, it would be \"strictly correct\" if one instead said \"more\nnearly optimal.\"\n\nI don't imagine people get too terribly confused by the lack of the\nword \"nearly,\" so I nearly don't care :-).\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://linuxfinances.info/info/languages.html\n\"Bureaucracies interpret communication as damage and route around it\"\n-- Jamie Zawinski\n", "msg_date": "Wed, 13 May 2009 11:36:45 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: superlative missuse" }, { "msg_contents": "Chris Browne <[email protected]> writes:\n> [email protected] (Angel Alvarez) writes:\n>> there is not so 'more optimal' thing but a simple 'better' thing.\n\n> If I wanted to be pedantic about it, I'd say that the word \"nearly\" is\n> missing.\n\n> That is, it would be \"strictly correct\" if one instead said \"more\n> nearly optimal.\"\n\nThis sort of construction is longstanding practice in English anyway.\nThe most famous example I can think of offhand is in the US\nConstitution: \"... 
in order to form a more perfect union ...\"\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 May 2009 13:08:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: superlative missuse " }, { "msg_contents": "El Miércoles, 13 de Mayo de 2009 Tom Lane escribió:\n> Chris Browne <[email protected]> writes:\n> > [email protected] (Angel Alvarez) writes:\n> >> there is not so 'more optimal' thing but a simple 'better' thing.\n> \n> > If I wanted to be pedantic about it, I'd say that the word \"nearly\" is\n> > missing.\n> \n> > That is, it would be \"strictly correct\" if one instead said \"more\n> > nearly optimal.\"\n> \n> This sort of construction is longstanding practice in English anyway.\n> The most famous example I can think of offhand is in the US\n> Constitution: \"... in order to form a more perfect union ...\"\n\nWooa!!\n\nSo \"Tom lane for President\" still applies!! :-)\n\nThanks all of you. \n\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n-- \nNo imprima este correo si no es necesario. El medio ambiente está en nuestras manos.\n->>--------------------------------------------------\n\n Angel J. Alvarez Miguel, Sección de Sistemas \n Area de Explotación, Servicios Informáticos\n \n Edificio Torre de Control, Campus Externo UAH\n Alcalá de Henares 28806, Madrid ** ESPAÑA **\n \n RedIRIS Jabber: [email protected]\n------------------------------------[www.uah.es]-<<-- \nTú lo compras, yo lo copio. Todo legal.\n-- \nAgua para todo? No, Agua para Todos.\n->>-----------------------------------------------\n Clist UAH a.k.a Angel\n---------------------------------[www.uah.es]-<<--\n\nNo le daría Cocacola Zero, ni a mi peor enemigo. Para eso está el gas Mostaza que es mas piadoso.\n", "msg_date": "Wed, 13 May 2009 20:36:17 +0200", "msg_from": "Angel Alvarez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: superlative missuse" }, { "msg_contents": "David Wilson wrote:\n> On Tue, May 12, 2009 at 5:53 PM, Angel Alvarez <[email protected]> wrote:\n> \n>> we suffer a 'more optimal' superlative missuse\n>>\n>> there is not so 'more optimal' thing but a simple 'better' thing.\n>>\n>> im not native english speaker but i think it still applies.\n>>\n>> Well this a superlative list so all of you deserve a better \"optimal\" use.\n> \n> As a native english speaker:\n> \n> You are technically correct. However, \"more optimal\" has a\n> well-understood meaning as \"closer to optimal\", and as such is\n> appropriate and generally acceptable despite being technically\n> incorrect.\n\nI disagree -- it's a glaring error. \"More optimized\" or \"better optimized\" are perfectly good, and correct, phrases. Why not use them? Every time I read \"more optimal,\" I am embarrassed for the person who is showing his/her ignorance of the basics of English grammar. If I wrote, \"It's more best,\" would you find that acceptable?\n\n> This is a postgres mailing list, not an english grammar mailing list...\n\nSince you replied on the list, it's only appropriate to get at least one rebuttal.\n\nCraig\n", "msg_date": "Thu, 14 May 2009 18:08:06 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: superlative missuse" }, { "msg_contents": "On Thu, May 14, 2009 at 9:08 PM, Craig James <[email protected]> wrote:\n\n> I disagree -- it's a glaring error.  \"More optimized\" or \"better optimized\"\n> are perfectly good, and correct, phrases.  Why not use them?  
Every time I\n> read \"more optimal,\" I am embarrassed for the person who is showing his/her\n> ignorance of the basics of English grammar.  If I wrote, \"It's more best,\"\n> would you find that acceptable?\n\nOh, I agree it's an error- and it's one I personally avoid. But\nunfortunately, it's remarkably common and has been for some time- as\nTom pointed out with the quote from the US Constitution. On the other\nhand, \"more best\" is more clearly a mistake because of the presence of\n\"better\" as an alternative that is both correct and commonly used.\n\"More optimized\" is infrequent enough to slip by a little more easily.\n\n> Since you replied on the list, it's only appropriate to get at least one\n> rebuttal.\n\nAs is, of course, your certain right. I think that's enough on the\nlist, though I'd be happy to continue off-list if there's any\ninterest. :)\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Thu, 14 May 2009 21:21:44 -0400", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: superlative missuse" } ]
[ { "msg_contents": "Hi\n\nI am wondering what stripe size, on a raid 0, is the most suitable for \npostgres 8.2?\n\nI read a performance tutorial by Bruce Momjian and it suggest setting \nthe stripe size to the same block size (as pg uses?)\n( http://momjian.us/main/writings/pgsql/hw_performance/index.html )\nBut I want to check whether I have understood this correctly.\n\nAre there any other hot confguration tips I should consider or does \nanybody have any suggestions for other raid configuration articles?\n\nregards\n\nthomas\n", "msg_date": "Wed, 13 May 2009 09:51:29 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "raid setup for db" }, { "msg_contents": "On Wed, May 13, 2009 at 1:51 AM, Thomas Finneid <[email protected]> wrote:\n> Hi\n>\n> I am wondering what stripe size, on a raid 0, is the most suitable for\n> postgres 8.2?\n>\n> I read a performance tutorial by Bruce Momjian and it suggest setting the\n> stripe size to the same block size (as pg uses?)\n> ( http://momjian.us/main/writings/pgsql/hw_performance/index.html )\n> But I want to check whether I have understood this correctly.\n>\n> Are there any other hot confguration tips I should consider or does anybody\n> have any suggestions for other raid configuration articles?\n\n8k is a pretty small stripe size. If you do anything that needs seq\nscans, a larger stripe size generally helps. It's probably more\nimportant to align the file system's block size to the underlying RAID\nthan to worry about the pg block size too much. Common values that\nhave been posted here running fast for both random and sequential\naccess have been in the 32k to 256k range with some outliers on the\nhigh end doing well.\n", "msg_date": "Wed, 13 May 2009 02:28:48 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid setup for db" }, { "msg_contents": "Thomas Finneid wrote:\n> Hi\n> \n> I am wondering what stripe size, on a raid 0, is the most suitable for\n> postgres 8.2?\n> \n\nHello\n\nRaid 0 for a database? This is a disaster waiting to happen.\nAre you sure you want to use raid0?\n\nregards\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Wed, 13 May 2009 10:44:18 +0200", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid setup for db" }, { "msg_contents": "\nDont worry about it, this is just for performance testing.\n\nthomas\n\n> Thomas Finneid wrote:\n>> Hi\n>>\n>> I am wondering what stripe size, on a raid 0, is the most suitable for\n>> postgres 8.2?\n>>\n>\n> Hello\n>\n> Raid 0 for a database? This is a disaster waiting to happen.\n> Are you sure you want to use raid0?\n>\n> regards\n> --\n> Rafael Martinez, <[email protected]>\n> Center for Information Technology Services\n> University of Oslo, Norway\n>\n> PGP Public Key: http://folk.uio.no/rafael/\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Wed, 13 May 2009 11:50:00 +0200 (CEST)", "msg_from": "\"Thomas Finneid\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid setup for db" } ]
[ { "msg_contents": " I was glad to find in 8.3.7 that pg was now smart enough to use an index \n for a simple UNION ALL. But unfortunately, it's not quite there yet for \n our use case.\n\n Consider the following dummy tables:\n\n create table foo (id serial primary key, val int not null);\n create table bar (id serial primary key, val int not null);\n create table baz (id1 serial primary key, id2 int not null);\n\n insert into foo (val) select x from generate_series(0,10000) as x;\n insert into bar (val) select x from generate_series(0,10000) as x;\n insert into baz (id2) select x from generate_series(0,10000) as x;\n\n This query correctly uses the primary key indexes on foo and bar:\n\n explain analyze select * from baz join (\n select id, val from foo\n union all\n select id, val from bar\n ) as foobar on(baz.id2=foobar.id) where baz.id1=42;\n\n But if I add a constant-valued column to indicate which branch of the \n union each result came from:\n\n explain analyze select * from baz join (\n select id, val, 'foo'::text as source from foo\n union all\n select id, val, 'bar'::text as source from bar\n ) as foobar on(baz.id2=foobar.id) where baz.id1=42;\n\n All of a sudden it insists on a sequential scan (and takes 800 times as \n long to run) even when enable_seqscan is set false. Is there a good \n reason for this, or is it just a missed opportunity in the optimizer?\n", "msg_date": "Thu, 14 May 2009 10:37:04 -0400", "msg_from": "\"Brad Jorsch\" <[email protected]>", "msg_from_op": true, "msg_subject": "UNION ALL and sequential scans" }, { "msg_contents": "\"Brad Jorsch\" <[email protected]> writes:\n> But if I add a constant-valued column to indicate which branch of the \n> union each result came from:\n\n> explain analyze select * from baz join (\n> select id, val, 'foo'::text as source from foo\n> union all\n> select id, val, 'bar'::text as source from bar\n> ) as foobar on(baz.id2=foobar.id) where baz.id1=42;\n\n> All of a sudden it insists on a sequential scan (and takes 800 times as \n> long to run) even when enable_seqscan is set false. Is there a good \n> reason for this, or is it just a missed opportunity in the optimizer?\n\nIt's an ancient and fundamental limitation that is fixed in 8.4.\nDo not expect to see it fixed in 8.3.x.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 May 2009 10:52:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION ALL and sequential scans " }, { "msg_contents": "On Thu, May 14, 2009 at 4:52 PM, Tom Lane <[email protected]> wrote:\n> \"Brad Jorsch\" <[email protected]> writes:\n>>  But if I add a constant-valued column to indicate which branch of the\n>>  union each result came from:\n>\n>>  explain analyze select * from baz join (\n>>  select id, val, 'foo'::text as source from foo\n>>  union all\n>>  select id, val, 'bar'::text as source from bar\n>>  ) as foobar on(baz.id2=foobar.id) where baz.id1=42;\n>\n>>  All of a sudden it insists on a sequential scan (and takes 800 times as\n>>  long to run) even when enable_seqscan is set false. 
Is there a good\n>>  reason for this, or is it just a missed opportunity in the optimizer?\n>\n> It's an ancient and fundamental limitation that is fixed in 8.4.\n> Do not expect to see it fixed in 8.3.x.\n\nDoes this also apply to the case of a join on an inherited table ?\n\nexample: http://archives.postgresql.org/pgsql-performance/2003-10/msg00018.php\n\nKind regards,\n\nMathieu\n", "msg_date": "Thu, 14 May 2009 17:10:29 +0200", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION ALL and sequential scans" }, { "msg_contents": "Mathieu De Zutter <[email protected]> writes:\n> On Thu, May 14, 2009 at 4:52 PM, Tom Lane <[email protected]> wrote:\n>> It's an ancient and fundamental limitation that is fixed in 8.4.\n>> Do not expect to see it fixed in 8.3.x.\n\n> Does this also apply to the case of a join on an inherited table ?\n\n> example: http://archives.postgresql.org/pgsql-performance/2003-10/msg00018.php\n\nWell, the particular issue described in that message is long gone.\nWhat Brad is complaining about is non-strict expressions in the\noutputs of append-relation members. An inheritance tree also forms\nan append-relation, but AFAIK there is no way to have anything but\nsimple Vars in its outputs so the case wouldn't arise. Do you have\na specific problem example in mind?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 May 2009 11:20:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION ALL and sequential scans " } ]
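Since the fix only arrives in 8.4, one workaround sometimes used on 8.3 (sketched here against the dummy tables from the start of the thread, not something proposed in the thread itself) is to push the join into each branch so that no non-strict expression sits underneath the append node:

    select baz.id1, baz.id2, foo.id, foo.val, 'foo'::text as source
      from baz join foo on baz.id2 = foo.id
     where baz.id1 = 42
    union all
    select baz.id1, baz.id2, bar.id, bar.val, 'bar'::text as source
      from baz join bar on baz.id2 = bar.id
     where baz.id1 = 42;

Each branch is then a plain join and can use the primary key indexes on foo and bar.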
[ { "msg_contents": "I just upgraded from a\n\n2xIntel Xeon-Harpertown 5450-Quadcore,16 GB,Redhat EL 5.1-64\nTo\n2xIntel Xeon-Nehalem 5570-Quadcore,36GB,Redhat EL 5.3-64\n \nAny advice on how I'll get the best of this server?\n\nThis is what I currently have:\nmax_connections = 100\nshared_buffers = 2048MB\nmaintenance_work_mem = 128MB\nmax_fsm_pages = 1000000\nbgwriter_delay = 10ms\nbgwriter_lru_maxpages = 0\nwal_sync_method = open_sync\nwal_buffers = 2MB\ncheckpoint_segments = 64\ncheckpoint_warning = 270s\narchive_mode = on\neffective_cache_size = 5400MB\ndefault_statistics_target = 500\nlogging_collector = on\nlog_rotation_size = 100MB\nlog_min_duration_statement = 1000\nlog_connections = off\nlog_disconnections = off\nlog_duration = off\nlog_line_prefix = '%t'\ntrack_counts = on\nautovacuum = on\nstatement_timeout = 5min\n\n\n", "msg_date": "Wed, 20 May 2009 12:22:21 -0400", "msg_from": "Kobby Dapaah <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql.conf suggestions?" }, { "msg_contents": "On Wed, May 20, 2009 at 12:22 PM, Kobby Dapaah <[email protected]> wrote:\n> I just upgraded from a\n>\n> 2xIntel Xeon-Harpertown 5450-Quadcore,16 GB,Redhat EL 5.1-64\n> To\n> 2xIntel Xeon-Nehalem 5570-Quadcore,36GB,Redhat EL 5.3-64\n>\n> Any advice on how I'll get the best of this server?\n>\n> This is what I currently have:\n> max_connections = 100\n> shared_buffers = 2048MB\n> maintenance_work_mem = 128MB\n> max_fsm_pages = 1000000\n> bgwriter_delay = 10ms\n> bgwriter_lru_maxpages = 0\n> wal_sync_method = open_sync\n> wal_buffers = 2MB\n> checkpoint_segments = 64\n> checkpoint_warning = 270s\n> archive_mode = on\n> effective_cache_size = 5400MB\n> default_statistics_target = 500\n> logging_collector = on\n> log_rotation_size = 100MB\n> log_min_duration_statement = 1000\n> log_connections = off\n> log_disconnections = off\n> log_duration = off\n> log_line_prefix = '%t'\n> track_counts = on\n> autovacuum = on\n> statement_timeout = 5min\n\nWhat are you doing with it? You probably want to increase work_mem,\nand very likely you want to decrease random_page_cost and\nseq_page_cost. I think there may be some more bgwriter settings that\nyou want to fiddle as well but that's not my area of expertise.\n\n...Robert\n", "msg_date": "Wed, 20 May 2009 23:09:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql.conf suggestions?" }, { "msg_contents": "On Wed, 20 May 2009, Kobby Dapaah wrote:\n\n> shared_buffers = 2048MB\n> effective_cache_size = 5400MB\n\nYou should consider seriously increasing effective_cache_size. You might \nalso double or quadruple shared_buffers from 2GB, but going much higher \nmay not buy you much--most people seem to find diminishing returns \nincreasing that beyond the 10GB range.\n\n> bgwriter_delay = 10ms\n> bgwriter_lru_maxpages = 0\n\nI found that bg_writer_delay doesn't really work so well when set to this \nsmall. It doesn't actually matter right now though, because you're \nturning the background writer off by setting brwriter_lru_maxpages=0. 
\nJust wanted to point this out because if you increase that later it may \npop up as a concern, small values for the delay make it more likely you'll \nneed to increase the multiplier to a higher value for background writing \nto work well.\n\nRobert's message already mentions the other things you should consider, \nparticularly work_mem.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 21 May 2009 01:47:52 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql.conf suggestions?" } ]
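A sketch of how the advice above might translate into postgresql.conf deltas on the 36GB machine. The specific numbers are illustrative assumptions, not values given in the thread.

    shared_buffers = 6144MB           # "double or quadruple" the current 2GB
    effective_cache_size = 24576MB    # leave most of the 36GB to the OS cache
    work_mem = 32MB                   # raise from the default; mind max_connections = 100
    random_page_cost = 2.0            # assumption: fast, mostly cached storage
    bgwriter_lru_maxpages = 100       # re-enable the background writer (was 0)
    bgwriter_delay = 200ms            # back off from the very small 10ms delay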
[ { "msg_contents": "Hi All,\n\nNot sure if this is the right pgsql-* \"channel\" to post to, but I was \nhoping maybe someone could answer a question from one of my fellow \ndevelopers. Please read below:\n\n> So, following the documentation, we wrote a little ASYNC version of \n> exec. here is the code:\n>\n> PGresult *PGClient::exec(char *query) {\n> PGresult *result;\n>\n> if (conn == NULL) {\n> ERROR(\"PGClient calling exec when a connection hasn't been \n> established yet\");\n> return NULL;\n> }\n> if (PQsendQuery(conn,query)==0) {\n> ERROR(\"PGClient wasn't able to sendQuery\");\n> return NULL;\n> }\n> int socketFD = PQsocket(conn);\n> pollfd pfd;\n> pfd.fd = socketFD;\n> pfd.events = POLLIN|POLLRDHUP;\n> consumeSome:\n> if (poll(&pfd,1,PG_EXEC_TIMEOUT*1000) == 0) {\n> ERROR(\"PGClient is timing out \");\n> return NULL;\n> }\n> if (PQconsumeInput(conn)==0) {\n> ERROR(\"PGClient detected trouble when trying to consumeInput\");\n> return NULL;\n> }\n> if (PQisBusy(conn))\n> goto consumeSome;\n> result = PQgetResult(conn);\n> if (PQisBusy(conn)) {\n> // something is wrong, because this is telling us that there is \n> more data on the way,\n> but there shouldn't be.\n> ERROR(\"PGClient had a weird case where more data is on its way\");\n> }\n> PGresult *tempResult=PQgetResult(conn);\n> if (tempResult!=0) {\n> // we either had multiple SQL queries that return multiple \n> results or something weird\n> happened here, caller should close connection\n> ERROR(\"PGClient had a weird case where multiple results were \n> returned\");\n> }\n> return result;\n> }\n>\n> So, assuming that every query we pass in is just a simple, 1 result- \n> set-returning query, we should never see PQisBusy returning a non- \n> NULL after we do the PQgetResult. BUT every now and then, in our \n> code, the PQisBusy function returns a non-NULL and we get that \n> ERROR(\"PGClient had a weird case where more data is on its way\")... \n> BUT when we do tempResult=PQgetResult, then it is ALWAYS null... can \n> someone please explain how PQisBusy can return true (when we aren't \n> expecting it to) but then PQgetResult returns nothing?\n\nIf there's any knowledge of why PQisBusy returns not-null, yet nothign \ncomes our of PQgetResult, could you let me know what's going on?\n\nAny help much appreciated!\n--Richard\n", "msg_date": "Wed, 20 May 2009 17:07:09 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "PQisBusy behaving strangely" } ]
[ { "msg_contents": "Hi all, iam having trouble with a query, in this query we have parameters, to indicate the starting month and the ending month (commented line in the query).\n\nYou can see the explain using the parameters of month from 1 to 6\n\nEXPLAIN ANALYZE\nselect pq.nome, mv.data, mv.valor,\n gprd.tipo,\n tprd.tipo,\n prd.nome,\n qtd.ano, qtd.mes, qtd.sum,\n rct.sum,\n rcb.ano, rcb.sigla, rcb.n_recibo, rcb.data, rcb.ncontrib, rcb.nome_cl, rcb.morada_cl, rcb.codpostal_cl\n from parques pq,\n movimentos mv left join a_mov_rcb amr on mv.idmovimento=amr.idmov left join recibos rcb on amr.idrecibo=rcb.idrecibo,\n (select idmov,ano,mes,idasso,sum(valor) from receitas group by 1,2,3,4) rct,\n (select idmov,ano,mes,idasso,sum(quantidade) from quantidades group by 1,2,3,4) qtd,\n produtos prd,\n tipoprodutos tprd,\n grp_prod gprd,\n a_prk_prod app\n where pq.idparque=mv.idparque\n and pq.idparque=10\n and rct.ano=2009\n and rct.mes between 1 and 6 /* HERE IS THE STARTING AND ENDING MONTH */\n and mv.idtipo_mv=21\n and mv.vivo\n and mv.idmovimento=rct.idmov\n and rct.idmov=qtd.idmov\n and rct.idasso=qtd.idasso\n and rct.ano=qtd.ano\n and rct.mes=qtd.mes\n and rct.idasso=app.idasso and app.idproduto=prd.idproduto\n and prd.idtipoproduto=tprd.idtipoproduto\n and prd.idgrp_prod=gprd.idgrp_prod\n order by mv.data, prd.idproduto, gprd.idgrp_prod, rcb.sigla, rcb.n_recibo, qtd.ano, qtd.mes\n\n\n\n\n\"Sort (cost=23852.81..23852.82 rows=1 width=526) (actual time=339.156..339.197 rows=146 loops=1)\"\n\" Sort Key: mv.data, prd.idproduto, gprd.idgrp_prod, rcb.sigla, rcb.n_recibo, qtd.ano, qtd.mes\"\n\" -> Nested Loop (cost=23682.31..23852.80 rows=1 width=526) (actual time=319.009..338.801 rows=146 loops=1)\"\n\" -> Nested Loop (cost=23682.31..23851.49 rows=1 width=462) (actual time=318.986..337.758 rows=146 loops=1)\"\n\" -> Nested Loop (cost=23682.31..23851.18 rows=1 width=334) (actual time=318.952..337.159 rows=146 loops=1)\"\n\" -> Nested Loop (cost=23682.31..23850.87 rows=1 width=202) (actual time=318.917..336.602 rows=146 loops=1)\"\n\" -> Nested Loop (cost=23682.31..23849.43 rows=1 width=126) (actual time=318.880..335.960 rows=146 loops=1)\"\n\" -> Hash Join (cost=23682.31..23841.15 rows=1 width=130) (actual time=318.809..335.161 rows=146 loops=1)\"\n\" Hash Cond: ((rct.idmov = mv.idmovimento) AND (rct.idasso = qtd.idasso) AND (rct.mes = qtd.mes))\"\n\" -> HashAggregate (cost=5143.05..5201.88 rows=4706 width=24) (actual time=69.150..79.543 rows=14972 loops=1)\"\n\" -> Seq Scan on receitas (cost=0.00..5033.23 rows=8786 width=24) (actual time=0.236..55.824 rows=15668 loops=1)\"\n\" Filter: ((ano = 2009) AND (mes >= 1) AND (mes <= 6))\"\n\" -> Hash (cost=18539.12..18539.12 rows=8 width=126) (actual time=249.418..249.418 rows=146 loops=1)\"\n\" -> Hash Join (cost=18332.76..18539.12 rows=8 width=126) (actual time=232.701..249.272 rows=146 loops=1)\"\n\" Hash Cond: (qtd.idmov = mv.idmovimento)\"\n\" -> HashAggregate (cost=3716.55..3810.31 rows=7501 width=24) (actual time=61.735..72.593 rows=15497 loops=1)\"\n\" -> Seq Scan on quantidades (cost=0.00..3526.18 rows=15230 width=24) (actual time=0.223..48.616 rows=15750 loops=1)\"\n\" Filter: (ano = 2009)\"\n\" -> Hash (cost=14588.99..14588.99 rows=2178 width=102) (actual time=170.719..170.719 rows=2559 loops=1)\"\n\" -> Hash Left Join (cost=7052.05..14588.99 rows=2178 width=102) (actual time=166.942..169.261 rows=2559 loops=1)\"\n\" Hash Cond: (amr.idrecibo = rcb.idrecibo)\"\n\" -> Hash Left Join (cost=4706.50..11472.92 rows=2178 width=24) (actual 
time=77.667..93.502 rows=2559 loops=1)\"\n\" Hash Cond: (mv.idmovimento = amr.idmov)\"\n\" -> Bitmap Heap Scan on movimentos mv (cost=3058.71..9558.85 rows=2178 width=20) (actual time=28.338..35.229 rows=2559 loops=1)\"\n\" Recheck Cond: ((idtipo_mv = 21) AND (10 = idparque))\"\n\" Filter: vivo\"\n\" -> BitmapAnd (cost=3058.71..3058.71 rows=2205 width=0) (actual time=28.196..28.196 rows=0 loops=1)\"\n\" -> Bitmap Index Scan on idx_03_idtipo_mv (cost=0.00..583.08 rows=33416 width=0) (actual time=7.307..7.307 rows=46019 loops=1)\"\n\" Index Cond: (idtipo_mv = 21)\"\n\" -> Bitmap Index Scan on idx_02_idparque (cost=0.00..2474.29 rows=141577 width=0) (actual time=19.948..19.948 rows=136676 loops=1)\"\n\" Index Cond: (10 = idparque)\"\n\" -> Hash (cost=812.13..812.13 rows=49413 width=8) (actual time=49.178..49.178 rows=49385 loops=1)\"\n\" -> Seq Scan on a_mov_rcb amr (cost=0.00..812.13 rows=49413 width=8) (actual time=0.069..24.160 rows=49385 loops=1)\"\n\" -> Hash (cost=1030.13..1030.13 rows=49313 width=86) (actual time=69.384..69.384 rows=49348 loops=1)\"\n\" -> Seq Scan on recibos rcb (cost=0.00..1030.13 rows=49313 width=86) (actual time=0.018..31.965 rows=49348 loops=1)\"\n\" -> Index Scan using asso_prk_prod_pkey on a_prk_prod app (cost=0.00..8.27 rows=1 width=8) (actual time=0.003..0.004 rows=1 loops=146)\"\n\" Index Cond: (rct.idasso = app.idasso)\"\n\" -> Index Scan using produtos_pkey on produtos prd (cost=0.00..1.43 rows=1 width=80) (actual time=0.002..0.003 rows=1 loops=146)\"\n\" Index Cond: (app.idproduto = prd.idproduto)\"\n\" -> Index Scan using grp_prod_pkey on grp_prod gprd (cost=0.00..0.30 rows=1 width=136) (actual time=0.002..0.002 rows=1 loops=146)\"\n\" Index Cond: (prd.idgrp_prod = gprd.idgrp_prod)\"\n\" -> Index Scan using tipoprodutos_pkey on tipoprodutos tprd (cost=0.00..0.30 rows=1 width=136) (actual time=0.002..0.002 rows=1 loops=146)\"\n\" Index Cond: (prd.idtipoproduto = tprd.idtipoproduto)\"\n\" -> Seq Scan on parques pq (cost=0.00..1.30 rows=1 width=72) (actual time=0.002..0.005 rows=1 loops=146)\"\n\" Filter: (idparque = 10)\"\n\"Total runtime: 339.973 ms\"\n\nnow here is the explain using the parameters from 1 to 4.\n\n\"Sort (cost=23944.24..23944.24 rows=1 width=526) (actual time=1887457.197..1887457.241 rows=124 loops=1)\"\n\" Sort Key: mv.data, prd.idproduto, gprd.idgrp_prod, rcb.sigla, rcb.n_recibo, qtd.ano, qtd.mes\"\n\" -> Nested Loop (cost=16068.57..23944.23 rows=1 width=526) (actual time=34392.436..1887456.339 rows=124 loops=1)\"\n\" Join Filter: (qtd.idmov = mv.idmovimento)\"\n\" -> Nested Loop (cost=9016.52..9328.02 rows=1 width=444) (actual time=156.601..834.424 rows=12586 loops=1)\"\n\" -> Nested Loop (cost=9016.52..9327.70 rows=1 width=316) (actual time=156.572..678.675 rows=12586 loops=1)\"\n\" -> Nested Loop (cost=9016.52..9327.39 rows=1 width=184) (actual time=156.544..579.804 rows=12586 loops=1)\"\n\" -> Hash Join (cost=9016.52..9325.95 rows=1 width=108) (actual time=156.501..304.851 rows=12586 loops=1)\"\n\" Hash Cond: ((qtd.idasso = app.idasso) AND (qtd.idmov = rct.idmov) AND (qtd.mes = rct.mes))\"\n\" -> HashAggregate (cost=3716.55..3810.31 rows=7501 width=24) (actual time=59.596..106.009 rows=15507 loops=1)\"\n\" -> Seq Scan on quantidades (cost=0.00..3526.18 rows=15230 width=24) (actual time=0.203..46.900 rows=15760 loops=1)\"\n\" Filter: (ano = 2009)\"\n\" -> Hash (cost=5250.56..5250.56 rows=2824 width=104) (actual time=96.788..96.788 rows=12597 loops=1)\"\n\" -> Hash Join (cost=5148.19..5250.56 rows=2824 width=104) (actual 
time=64.588..85.241 rows=12597 loops=1)\"\n\" Hash Cond: (rct.idasso = app.idasso)\"\n\" -> HashAggregate (cost=5099.11..5134.41 rows=2824 width=24) (actual time=62.779..71.578 rows=12597 loops=1)\"\n\" -> Seq Scan on receitas (cost=0.00..5033.23 rows=5271 width=24) (actual time=11.277..51.444 rows=13173 loops=1)\"\n\" Filter: ((ano = 2009) AND (mes >= 1) AND (mes <= 4))\"\n\" -> Hash (cost=34.16..34.16 rows=1193 width=80) (actual time=1.790..1.790 rows=1193 loops=1)\"\n\" -> Nested Loop (cost=0.00..34.16 rows=1193 width=80) (actual time=0.025..1.186 rows=1193 loops=1)\"\n\" -> Seq Scan on parques pq (cost=0.00..1.30 rows=1 width=72) (actual time=0.012..0.016 rows=1 loops=1)\"\n\" Filter: (idparque = 10)\"\n\" -> Seq Scan on a_prk_prod app (cost=0.00..20.93 rows=1193 width=8) (actual time=0.006..0.402 rows=1193 loops=1)\"\n\" -> Index Scan using produtos_pkey on produtos prd (cost=0.00..1.43 rows=1 width=80) (actual time=0.015..0.016 rows=1 loops=12586)\"\n\" Index Cond: (app.idproduto = prd.idproduto)\"\n\" -> Index Scan using grp_prod_pkey on grp_prod gprd (cost=0.00..0.30 rows=1 width=136) (actual time=0.003..0.005 rows=1 loops=12586)\"\n\" Index Cond: (prd.idgrp_prod = gprd.idgrp_prod)\"\n\" -> Index Scan using tipoprodutos_pkey on tipoprodutos tprd (cost=0.00..0.30 rows=1 width=136) (actual time=0.003..0.009 rows=1 loops=12586)\"\n\" Index Cond: (prd.idtipoproduto = tprd.idtipoproduto)\"\n\" -> Hash Left Join (cost=7052.05..14588.99 rows=2178 width=102) (actual time=146.667..148.973 rows=2559 loops=12586)\"\n\" Hash Cond: (amr.idrecibo = rcb.idrecibo)\"\n\" -> Hash Left Join (cost=4706.50..11472.92 rows=2178 width=24) (actual time=68.974..79.270 rows=2559 loops=12586)\"\n\" Hash Cond: (mv.idmovimento = amr.idmov)\"\n\" -> Bitmap Heap Scan on movimentos mv (cost=3058.71..9558.85 rows=2178 width=20) (actual time=23.592..25.603 rows=2559 loops=12586)\"\n\" Recheck Cond: ((idtipo_mv = 21) AND (10 = idparque))\"\n\" Filter: vivo\"\n\" -> BitmapAnd (cost=3058.71..3058.71 rows=2205 width=0) (actual time=23.474..23.474 rows=0 loops=12586)\"\n\" -> Bitmap Index Scan on idx_03_idtipo_mv (cost=0.00..583.08 rows=33416 width=0) (actual time=6.003..6.003 rows=46024 loops=12586)\"\n\" Index Cond: (idtipo_mv = 21)\"\n\" -> Bitmap Index Scan on idx_02_idparque (cost=0.00..2474.29 rows=141577 width=0) (actual time=16.570..16.570 rows=136676 loops=12586)\"\n\" Index Cond: (10 = idparque)\"\n\" -> Hash (cost=812.13..812.13 rows=49413 width=8) (actual time=45.280..45.280 rows=49387 loops=12586)\"\n\" -> Seq Scan on a_mov_rcb amr (cost=0.00..812.13 rows=49413 width=8) (actual time=0.010..21.042 rows=49387 loops=12586)\"\n\" -> Hash (cost=1030.13..1030.13 rows=49313 width=86) (actual time=63.760..63.760 rows=49350 loops=12586)\"\n\" -> Seq Scan on recibos rcb (cost=0.00..1030.13 rows=49313 width=86) (actual time=0.006..26.959 rows=49350 loops=12586)\"\n\"Total runtime: 1887457.849 ms\"\n\nhas we can see the query planner, decided to do sequencial scan in \"a_mov_rcb\" table and \"recibos\", when i set the flag \"enable_seqscan\" to false all goes well.\n\nAnyways i could probably set the \"enable_seqscan\" always of but i dont know if thats a good idea, because if it was that would be set as off by default.\n\nIs there anything i could do to go around this?\nOr can anyone give me a hint why query planner goes sequencial scan when i change the parameters.\n\n\nThanks in advanced\n\n\n", "msg_date": "Thu, 21 May 2009 12:42:46 +0100", "msg_from": "Daniel Ferreira <[email protected]>", "msg_from_op": true, 
"msg_subject": "query planner uses sequencial scan instead of index scan" }, { "msg_contents": "Daniel Ferreira <[email protected]> writes:\n> has we can see the query planner, decided to do sequencial scan in \"a_mov_rcb\" table and \"recibos\", when i set the flag \"enable_seqscan\" to false all goes well.\n\nIt's not really the seqscan that's the problem. The problem is this\nrowcount misestimate:\n\n> \" -> Hash Join (cost=9016.52..9325.95 rows=1 width=108) (actual time=156.501..304.851 rows=12586 loops=1)\"\n> \" Hash Cond: ((qtd.idasso = app.idasso) AND (qtd.idmov = rct.idmov) AND (qtd.mes = rct.mes))\"\n\nwhich is causing the planner to suppose that the remaining joins should\nbe done as nestloops. That would be the right thing if there really\nwere only one row... with twelve thousand of them, it's taking\ntwelve thousand times longer than the planner expected.\n\nThe right fix would be to get the estimate to be better. (Even if it\nwere 5 or 10 rows the planner would probably avoid the nestloops.)\nBut I'm not sure how much you can improve it by raising the stats\ntargets. This is trying to estimate the size of the join between\ntwo GROUP BY subselects, and the planner is not tremendously good\nat that.\n\nA brute force solution might go like this:\n\n1. Select the two GROUP BY sub-results into temp tables.\n\n2. ANALYZE the temp tables.\n\n3. Do the original query using the temp tables.\n\nBut it's a pain ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 May 2009 11:41:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner uses sequencial scan instead of index scan " } ]
[ { "msg_contents": "Hello,\n\ti have to buy a new server and in the budget i have (small) i have to select \none of this two options:\n\n-4 sas 146gb 15k rpm raid10.\n-8 sas 146gb 10k rpm raid10.\n\nThe server would not be only dedicated to postgresql but to be a file server, \nthe rest of options like plenty of ram and battery backed cache raid card are \ndone but this two different hard disk configuration have the same price and i am \nnot sure what it is better.\n\nIf the best option it is different for postgresql that for a file server i would \nlike to know too, thanks.\n\nRegards,\nMiguel Angel.\n", "msg_date": "Thu, 21 May 2009 14:47:56 +0200", "msg_from": "Linos <[email protected]>", "msg_from_op": true, "msg_subject": "raid10 hard disk choice" }, { "msg_contents": "On Thu, 21 May 2009, Linos wrote:\n> \ti have to buy a new server and in the budget i have (small) i have to \n> select one of this two options:\n>\n> -4 sas 146gb 15k rpm raid10.\n> -8 sas 146gb 10k rpm raid10.\n\nIt depends what you are doing. I think in most situations, the second \noption is better, but there may be a few situations where the reverse is \ntrue.\n\nBasically, the first option will only be faster if you are doing lots of \nseeking (small requests) in a single thread. As soon as you go \nmulti-threaded or are looking at sequential scans, you're better off with \nmore discs.\n\nMatthew\n\n-- \n Experience is what allows you to recognise a mistake the second time you\n make it.\n", "msg_date": "Thu, 21 May 2009 13:59:20 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Thu, May 21, 2009 at 8:47 AM, Linos <[email protected]> wrote:\n> Hello,\n>        i have to buy a new server and in the budget i have (small) i have to\n> select one of this two options:\n>\n> -4 sas 146gb 15k rpm raid10.\n> -8 sas 146gb 10k rpm raid10.\n>\n> The server would not be only dedicated to postgresql but to be a file\n> server, the rest of options like plenty of ram and battery backed cache raid\n> card are done but this two different hard disk configuration have the same\n> price and i am not sure what it is better.\n>\n> If the best option it is different for postgresql that for a file server i\n> would like to know too, thanks.\n\nI would say go with the 10k drives. more space, flexibility (you can\ndedicate a volume to WAL), and more total performance on paper. I\nwould also, if you can afford it and they fit, get two small sata\ndrives, mount raid 1 and put the o/s on those.\n\nmerlin\n", "msg_date": "Thu, 21 May 2009 10:25:32 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "\nMatthew Wakeling wrote:\n> On Thu, 21 May 2009, Linos wrote:\n>> i have to buy a new server and in the budget i have (small) i \n>> have to select one of this two options:\n>>\n>> -4 sas 146gb 15k rpm raid10.\n>> -8 sas 146gb 10k rpm raid10.\n>\n> It depends what you are doing. I think in most situations, the second \n> option is better, but there may be a few situations where the reverse \n> is true.\n>\n> Basically, the first option will only be faster if you are doing lots \n> of seeking (small requests) in a single thread. As soon as you go \n> multi-threaded or are looking at sequential scans, you're better off \n> with more discs.\n>\n> Matthew\n>\nI agree. 
I think you would be better off with more disks I know from \nmy own experience when I went from 8 73gb 15k drives to 16 73gb 15k \ndrives I noticed a big difference in the amount of time it took to run \nmy queries. I can't give you hard numbers but most of my queries take \nhours to run so you tend to notice when they finish 20-30 minutes \nsooner. The second option also doubles your capacity which in general \nis a good idea. It's always easier to slightly overbuild than try and \nfix a storage problem.\n\nMight I also suggest that you pick up at least one spare drive while \nyou're at it. \n\nThis may be somewhat of a tangent but it speaks to having a spare drive \non hand. A word of warning for anyone out there considering the Seagate \n1.5TB SATA drives (ST31500341AS). (I use them for an off-site backup of \na backup array not pg) I'm going through a fiasco right now with these \ndrives and I wish I had purchased more when I did. I built a backup \narray with 16 of these back in October and it works great. In October \nthese drives shipped with firmware SD17. I needed to add another 16 \ndrive array but the ST31500341AS drives that are currently shipping have \na non-flashable CC1H firmware that will not work on high port count \nAdaptec cards (5XXXX) which is what I have. It is now impossible to \nfind any of these drives with firmware compatible with my controller, \ntrust me I spent a couple hours on the phone with Seagate. When I built \nthe first array I bought a single spare drive. As soon as two drives \ndie I'm going to be in the position of having to either scrap all of \nthem or buy a new controller that will work with the new firmware. If I \nhadn't bought that extra drive the array would be dead as soon as one of \nthe drives goes.\n\nMy point is... if you have the means, buy at least one spare while you can.\n\nBob\n\n", "msg_date": "Thu, 21 May 2009 09:34:46 -0500", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Thu, 2009-05-21 at 10:25 -0400, Merlin Moncure wrote:\n> On Thu, May 21, 2009 at 8:47 AM, Linos <[email protected]> wrote:\n> > Hello,\n> > i have to buy a new server and in the budget i have (small) i have to\n> > select one of this two options:\n> >\n> > -4 sas 146gb 15k rpm raid10.\n> > -8 sas 146gb 10k rpm raid10.\n> >\n> > The server would not be only dedicated to postgresql but to be a file\n> > server, the rest of options like plenty of ram and battery backed cache raid\n> > card are done but this two different hard disk configuration have the same\n> > price and i am not sure what it is better.\n> >\n> > If the best option it is different for postgresql that for a file server i\n> > would like to know too, thanks.\n> \n> I would say go with the 10k drives. more space, flexibility (you can\n> dedicate a volume to WAL), and more total performance on paper. I\n> would also, if you can afford it and they fit, get two small sata\n> drives, mount raid 1 and put the o/s on those.\n\n+1 on that.\n\nJoshua D. Drake\n\n\n> \n> merlin\n> \n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Thu, 21 May 2009 08:58:12 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Thu, May 21, 2009 at 8:34 AM, Robert Schnabel <[email protected]> wrote:\n> the phone with Seagate.  When I built the first array I bought a single\n> spare drive.  As soon as two drives die I'm going to be in the position of\n> having to either scrap all of them or buy a new controller that will work\n> with the new firmware.  If I hadn't bought that extra drive the array would\n> be dead as soon as one of the drives goes.\n>\n> My point is... if you have the means, buy at least one spare while you can.\n\nI'd go shopping for more spares on ebay now...\n", "msg_date": "Thu, 21 May 2009 10:39:22 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Thu, 21 May 2009, Linos wrote:\n>> i have to buy a new server and in the budget i have (small) i have \n>> to select one of this two options:\n>>\n>> -4 sas 146gb 15k rpm raid10.\n>> -8 sas 146gb 10k rpm raid10.\n> \n> It depends what you are doing. I think in most situations, the second \n> option is better, but there may be a few situations where the reverse is \n> true.\n> \n> Basically, the first option will only be faster if you are doing lots of \n> seeking (small requests) in a single thread. As soon as you go \n> multi-threaded or are looking at sequential scans, you're better off \n> with more discs.\n\nSince you have to share the disks with a file server, which might be heavily used, the 8-disk array will probably be better even if you're doing lots of seeking in a single thread.\n\nCraig\n", "msg_date": "Thu, 21 May 2009 10:10:31 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Thu, May 21, 2009 at 8:59 AM, Matthew Wakeling <[email protected]> wrote:\n> On Thu, 21 May 2009, Linos wrote:\n>>\n>>        i have to buy a new server and in the budget i have (small) i have\n>> to select one of this two options:\n>>\n>> -4 sas 146gb 15k rpm raid10.\n>> -8 sas 146gb 10k rpm raid10.\n>\n> It depends what you are doing. I think in most situations, the second option\n> is better, but there may be a few situations where the reverse is true.\n\nOne possible case of this - I believe that 15K drives will allow you\nto commit ~250 times per second (15K/60) vs. ~166 times per second\n(10K/60). If you have a lot of small write transactions, this might\nbe an issue.\n\n...Robert\n", "msg_date": "Thu, 21 May 2009 16:29:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Thu, May 21, 2009 at 2:29 PM, Robert Haas <[email protected]> wrote:\n> On Thu, May 21, 2009 at 8:59 AM, Matthew Wakeling <[email protected]> wrote:\n>> On Thu, 21 May 2009, Linos wrote:\n>>>\n>>>        i have to buy a new server and in the budget i have (small) i have\n>>> to select one of this two options:\n>>>\n>>> -4 sas 146gb 15k rpm raid10.\n>>> -8 sas 146gb 10k rpm raid10.\n>>\n>> It depends what you are doing. I think in most situations, the second option\n>> is better, but there may be a few situations where the reverse is true.\n>\n> One possible case of this - I believe that 15K drives will allow you\n> to commit ~250 times per second (15K/60) vs. ~166 times per second\n> (10K/60).  
If you have a lot of small write transactions, this might\n> be an issue.\n\nBut in a RAID-10 you aggreate pairs like RAID-0, so you could write\n250(n/2) times per second on 15k where n=4 and 166(n/2) for 10k drives\nwhere n=8. So 500 versus 664... ? Or am I getting it wrong.\n", "msg_date": "Thu, 21 May 2009 15:41:05 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "\nOn 5/21/09 2:41 PM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> On Thu, May 21, 2009 at 2:29 PM, Robert Haas <[email protected]> wrote:\n>> On Thu, May 21, 2009 at 8:59 AM, Matthew Wakeling <[email protected]>\n>> wrote:\n>>> On Thu, 21 May 2009, Linos wrote:\n>>>> \n>>>>        i have to buy a new server and in the budget i have (small) i have\n>>>> to select one of this two options:\n>>>> \n>>>> -4 sas 146gb 15k rpm raid10.\n>>>> -8 sas 146gb 10k rpm raid10.\n>>> \n>>> It depends what you are doing. I think in most situations, the second option\n>>> is better, but there may be a few situations where the reverse is true.\n>> \n>> One possible case of this - I believe that 15K drives will allow you\n>> to commit ~250 times per second (15K/60) vs. ~166 times per second\n>> (10K/60).  If you have a lot of small write transactions, this might\n>> be an issue.\n> \n> But in a RAID-10 you aggreate pairs like RAID-0, so you could write\n> 250(n/2) times per second on 15k where n=4 and 166(n/2) for 10k drives\n> where n=8. So 500 versus 664... ? Or am I getting it wrong.\n\n From the original message:\n\n\" The server would not be only dedicated to postgresql but to be a file\nserver,\nthe rest of options like plenty of ram and battery backed cache raid card\nare\ndone but this two different hard disk configuration have the same price and\ni am\nnot sure what it is better.\"\n\n\nSo, with a write-back cache battery backed up raid card, xlog writes won't\nbe an issue.\n\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 21 May 2009 15:04:40 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Thu, May 21, 2009 at 5:41 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, May 21, 2009 at 2:29 PM, Robert Haas <[email protected]> wrote:\n>> On Thu, May 21, 2009 at 8:59 AM, Matthew Wakeling <[email protected]> wrote:\n>>> On Thu, 21 May 2009, Linos wrote:\n>>>>\n>>>>        i have to buy a new server and in the budget i have (small) i have\n>>>> to select one of this two options:\n>>>>\n>>>> -4 sas 146gb 15k rpm raid10.\n>>>> -8 sas 146gb 10k rpm raid10.\n>>>\n>>> It depends what you are doing. I think in most situations, the second option\n>>> is better, but there may be a few situations where the reverse is true.\n>>\n>> One possible case of this - I believe that 15K drives will allow you\n>> to commit ~250 times per second (15K/60) vs. ~166 times per second\n>> (10K/60).  If you have a lot of small write transactions, this might\n>> be an issue.\n>\n> But in a RAID-10 you aggreate pairs like RAID-0, so you could write\n> 250(n/2) times per second on 15k where n=4 and 166(n/2) for 10k drives\n> where n=8.  So 500 versus 664... ?  Or am I getting it wrong.\n\nWell, that would be true if every write used a different disk, but I\ndon't think that will be the case in practice. 
The WAL writes are\nvery small, so often you'll have multiple writes even to the same\nblock. But even if they're to different blocks they're likely to be\nin the same RAID stripe.\n\n...Robert\n", "msg_date": "Thu, 21 May 2009 18:05:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "\nOn 5/21/09 3:05 PM, \"Robert Haas\" <[email protected]> wrote:\n\n> On Thu, May 21, 2009 at 5:41 PM, Scott Marlowe <[email protected]>\n> wrote:\n>> On Thu, May 21, 2009 at 2:29 PM, Robert Haas <[email protected]> wrote:\n>>> On Thu, May 21, 2009 at 8:59 AM, Matthew Wakeling <[email protected]>\n>>> wrote:\n>>>> On Thu, 21 May 2009, Linos wrote:\n>>>>> \n>>>>>        i have to buy a new server and in the budget i have (small) i have\n>>>>> to select one of this two options:\n>>>>> \n>>>>> -4 sas 146gb 15k rpm raid10.\n>>>>> -8 sas 146gb 10k rpm raid10.\n>>>> \n>>>> It depends what you are doing. I think in most situations, the second\n>>>> option\n>>>> is better, but there may be a few situations where the reverse is true.\n>>> \n>>> One possible case of this - I believe that 15K drives will allow you\n>>> to commit ~250 times per second (15K/60) vs. ~166 times per second\n>>> (10K/60).  If you have a lot of small write transactions, this might\n>>> be an issue.\n>> \n>> But in a RAID-10 you aggreate pairs like RAID-0, so you could write\n>> 250(n/2) times per second on 15k where n=4 and 166(n/2) for 10k drives\n>> where n=8.  So 500 versus 664... ?  Or am I getting it wrong.\n> \n> Well, that would be true if every write used a different disk, but I\n> don't think that will be the case in practice. The WAL writes are\n> very small, so often you'll have multiple writes even to the same\n> block. But even if they're to different blocks they're likely to be\n> in the same RAID stripe.\n\nDisk count and stripe size don't have much to do with it, the write cache\nmerges write requests and the client (the wal log write) doesn't have to\nwait on anything. The RAID card can merge and order the writes, so it can\ngo nearly at sequential transfer rate, limited more by other concurrent\npressure on the raid card's cache than anything else.\n\nSince WAL log requests are sequential (but small) this provides huge gains\nand a large multiplier over the raw iops of the drive.\n\n> \n> ...Robert\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 21 May 2009 19:14:20 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Thu, 21 May 2009, Scott Marlowe wrote:\n\n> But in a RAID-10 you aggreate pairs like RAID-0, so you could write\n> 250(n/2) times per second on 15k where n=4 and 166(n/2) for 10k drives\n> where n=8. So 500 versus 664... ? Or am I getting it wrong.\n\nAdding more spindles doesn't improve the fact that the disks can only \ncommit once per revolution. 
WAL writes are way too fine grained for them \nto get split across stripes to improve the commit rate.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 22 May 2009 02:59:18 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Thu, 21 May 2009, Robert Schnabel wrote:\n\n> A word of warning for anyone out there considering the Seagate 1.5TB \n> SATA drives (ST31500341AS)...I'm going through a fiasco right now with \n> these drives and I wish I had purchased more when I did.\n\nThose drives are involved in the worst firmware debacle Seagate has had in \nyears, so no surprise they're causing problems for you just like so many \nothers. I don't think you came to the right conclusion for how to avoid \nthis pain in the future though--buying more garbage drives isn't really \nsatisfying.\n\nWhat you should realize is to never assemble a production server using \nnewly designed drives. Always stay at least 6 months and at least one \ngeneration behind the state of the art. All the drive manufacturers right \nnow are lucky if they can deliver a reliable 1TB drive, nobody has a \nreliable 1.5TB or larger drive yet. (Check out the miserable user ratings \nfor all the larger capacity drives available right now on sites like \nnewegg.com if you don't believe me) Right now, Seagate's 1.5TB drive is 7 \nmonths old, and I'd still consider it bleeding edge for server use.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 22 May 2009 03:08:08 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "Thanks for all the suggestions i will go with 8 10k disks, well 9 if you count \nthe spare now that i am scared :)\n\nRegards,\nMiguel Angel.\n", "msg_date": "Fri, 22 May 2009 11:46:16 +0200", "msg_from": "Linos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "Greg Smith wrote:\n> On Thu, 21 May 2009, Robert Schnabel wrote:\n>> A word of warning for anyone out there considering the Seagate 1.5TB \n>> SATA drives (ST31500341AS)...I'm going through a fiasco right now \n>> with these drives and I wish I had purchased more when I did.\n> I don't think you came to the right conclusion for how to avoid this \n> pain in the future though--buying more garbage drives isn't really \n> satisfying.\nNo, the original drives I have work fine. The problem, as you point \nout, is that Seagate changed the firmware and made it so that you cannot \nflash it to a different version.\n\n> What you should realize is to never assemble a production server using \n> newly designed drives.\nI totally agree. I'm using these drives for an off-site backup of a \nbackup. There is no original data on these. I needed the capacity. I \nwas willing to accept the performance/reliability hit considering the $$/TB.\n\nBob\n\n", "msg_date": "Fri, 22 May 2009 08:38:13 -0500", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Fri, 22 May 2009, Robert Schnabel wrote:\n\n> No, the original drives I have work fine. 
The problem, as you point out, is \n> that Seagate changed the firmware and made it so that you cannot flash it to \n> a different version.\n\nThe subtle point here is that whether a drive has been out long enough to \nhave a stable firmware is very much a component of its overall quality and \nreliability--regardless of whether the drive works fine in any one system \nor not. The odds of you'll get a RAID compability breaking firmware \nchange in the first few months a drive is on the market are painfully \nhigh.\n\nYou don't have to defend that it was the right decision for you, I was \njust uncomfortable with the way you were extrapolating your experience to \nprovide a larger rule of thumb. Allocated hot spares and cold spares on \nthe shelf are both important, but for most people those should be a safety \nnet on top of making the safest hardware choice, rather than as a way to \nallow taking excessive risks in what you buy.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 22 May 2009 11:08:09 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Fri, May 22, 2009 at 9:08 AM, Greg Smith <[email protected]> wrote:\n> On Fri, 22 May 2009, Robert Schnabel wrote:\n>\n>> No, the original drives I have work fine.  The problem, as you point out,\n>> is that Seagate changed the firmware and made it so that you cannot flash it\n>> to a different version.\n>\n> The subtle point here is that whether a drive has been out long enough to\n> have a stable firmware is very much a component of its overall quality and\n> reliability--regardless of whether the drive works fine in any one system or\n> not.  The odds of you'll get a RAID compability breaking firmware change in\n> the first few months a drive is on the market are painfully high.\n\nAlso keep in mind that 1.5 and 2TB drives that are out right now are\nall consumer grade drives, built to be put into a workstation singly\nor maybe in pairs. 
It's much less common to see such a change in\nserver class drives, because the manufacturers know where they'll be\nused, and also because the server grade drives usually piggy back on\nthe workstation class drives for a lot of their tech and bios, so the\nneed for sudden changes are less common.\n", "msg_date": "Fri, 22 May 2009 09:59:52 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "On Fri, 22 May 2009, Scott Marlowe wrote:\n\n> It's much less common to see such a change in server class drives\n\nThis is a good point, and I just updated \nhttp://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks with a section \nabout this topic (the last one under \"ATA Disks\").\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 22 May 2009 12:40:00 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" }, { "msg_contents": "Greg Smith wrote:\n> On Fri, 22 May 2009, Scott Marlowe wrote:\n>\n>> It's much less common to see such a change in server class drives\n>\n> This is a good point, and I just updated \n> http://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks with a section \n> about this topic (the last one under \"ATA Disks\").\nAnd I can confirm that point because the 1TB SAS drives (ST31000640SS) I \njust received to replace all the 1.5TB drives have the same firmware as \nthe ones I purchased back in October. 60% more $$, 50% less capacity... \nbut they work :-) Lesson learned.\n", "msg_date": "Fri, 22 May 2009 12:25:55 -0500", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 hard disk choice" } ]
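A rough way to observe the per-commit ceiling discussed in the thread above is a single-client pgbench run of small write transactions; the scratch database name, scale and transaction count below are arbitrary choices, not values from the thread.

    createdb bench
    pgbench -i -s 10 bench
    pgbench -c 1 -t 2000 -n bench

Without a write-back cache the single-client tps tends to sit near the rotational limit (~166 for 10k, ~250 for 15k drives), while a battery-backed cache lifts that ceiling, which is why the xlog commit rate stops being the deciding factor between the two disk options.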
[ { "msg_contents": "Hi,\n\nRecent change postgresql server from Amazon EC2 small into large one.\nThat gives me x86_64 arch, two core cpu and 7.5GB ram. Atm got almost\n~2000 small databases at that server and autovacuum working hole time\n(witch isn't good for performance as I notice at cacti, one core is\nbusy in 60% hole time). How can I tweak postgresql.conf to get better\nperformance ? Maybe number of database is huge but most of them are\nunused most of time and others (~400-500) do mainly selects only with\nsmall number of inserts or deletes from time to time.\n\nMy configuration is Fedora Core 10:\npostgresql-libs-8.3.7-1.fc10.x86_64\npostgresql-8.3.7-1.fc10.x86_64\npostgresql-server-8.3.7-1.fc10.x86_64\npostgresql-devel-8.3.7-1.fc10.x86_64\n\npostgresql.conf:\n#v+\nmax_connections = 500\t\t\t\nshared_buffers = 200MB\t\t\t\nwork_mem = 4096\t\t\t\t\nmaintenance_work_mem = 256MB\t\t\nmax_fsm_pages = 204800\t\t\t\nmax_fsm_relations = 4000\t\t\nvacuum_cost_delay = 0\t\t\t\nvacuum_cost_page_hit = 1\t\t\nvacuum_cost_page_miss = 10\t\t\nvacuum_cost_page_dirty = 20\t\t\nvacuum_cost_limit = 200\t\t\neffective_cache_size = 2048MB\nlogging_collector = on\t\t\t\nlog_truncate_on_rotation = on\t\t\nlog_rotation_age = 1d\t\t\t\nlog_rotation_size = 0\t\t\t\ntrack_activities = off\ntrack_counts = on\nlog_parser_stats = off\nlog_planner_stats = off\nlog_executor_stats = off\nlog_statement_stats = off\nautovacuum = on\t\t\t\t\nlog_autovacuum_min_duration = -1\t\nautovacuum_max_workers = 3\t\t\nautovacuum_naptime = 10min\t\t\nautovacuum_vacuum_threshold = 10000\t\nautovacuum_analyze_threshold = 10000\t\nautovacuum_vacuum_scale_factor = 0.5\t\nautovacuum_analyze_scale_factor = 0.4\t\nautovacuum_freeze_max_age = 200000000\t\nautovacuum_vacuum_cost_delay = 20\t\nautovacuum_vacuum_cost_limit = -1\t\n#v-\n\nRegards\n-- \nŁukasz Jagiełło\nSystem Administrator\nG-Forces Web Management Polska sp. z o.o. (www.gforces.pl)\n\nUl. Kruczkowskiego 12, 80-288 Gdańsk\nSpółka wpisana do KRS pod nr 246596 decyzją Sądu Rejonowego Gdańsk-Północ\n", "msg_date": "Sun, 24 May 2009 21:46:38 +0200", "msg_from": "=?UTF-8?B?xYF1a2FzeiBKYWdpZcWCxYJv?= <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with autovacuum" }, { "msg_contents": "2009/5/24 Łukasz Jagiełło <[email protected]>:\n> Hi,\n>\n> Recent change postgresql server from Amazon EC2 small into large one.\n> That gives me x86_64 arch, two core cpu and 7.5GB ram. Atm got almost\n> ~2000 small databases at that server and autovacuum working hole time\n\n> postgresql.conf:\n> max_fsm_pages = 204800\n> max_fsm_relations = 4000\n\nSo, in 2000 databases, there's only an average of 2 relations per db\nand 102 dead rows? Cause that's all you got room for with those\nsettings.\n\nWhats the last 20 or so lines of vacuum verbose as run by a superuser say?\n", "msg_date": "Mon, 25 May 2009 09:32:20 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "2009/5/25 Scott Marlowe <[email protected]>:\n\n>\n> So, in 2000 databases, there's only an average of 2 relations per db\n> and 102 dead rows?  Cause that's all you got room for with those\n> settings.\n>\n> Whats the last 20 or so lines of vacuum verbose as run by a superuser say?\n\naccording to http://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html\nmax_fsm_relations applies only to tables and indices, and it says \"in\ndatabase\", so I presume that means per database. 
In which case, those\nsettings are ok.\nIt would be nice, to see if vacuum actually complains about it.\n\n\n\n-- \nGJ\n", "msg_date": "Mon, 25 May 2009 16:50:35 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "2009/5/25 Grzegorz Jaśkiewicz <[email protected]>:\n> 2009/5/25 Scott Marlowe <[email protected]>:\n>\n>>\n>> So, in 2000 databases, there's only an average of 2 relations per db\n>> and 102 dead rows?  Cause that's all you got room for with those\n>> settings.\n>>\n>> Whats the last 20 or so lines of vacuum verbose as run by a superuser say?\n>\n> according to http://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html\n> max_fsm_relations applies only to tables and indices, and it says \"in\n> database\", so I presume that means per database. In which case, those\n> settings are ok.\n> It would be nice, to see if vacuum actually complains about it.\n\nThe docs say: \"These parameters control the size of the shared free\nspace map,\" Key word being shared.\n", "msg_date": "Mon, 25 May 2009 10:15:48 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "W dniu 25 maja 2009 17:32 użytkownik Scott Marlowe\n<[email protected]> napisał:\n>> Recent change postgresql server from Amazon EC2 small into large one.\n>> That gives me x86_64 arch, two core cpu and 7.5GB ram. Atm got almost\n>> ~2000 small databases at that server and autovacuum working hole time\n>\n>> postgresql.conf:\n>> max_fsm_pages = 204800\n>> max_fsm_relations = 4000\n>\n> So, in 2000 databases, there's only an average of 2 relations per db\n> and 102 dead rows?  Cause that's all you got room for with those\n> settings.\n>\n> Whats the last 20 or so lines of vacuum verbose as run by a superuser say?\n\nGuess you was right\n\n#v+\nTotal free space (including removable row versions) is 2408 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n1 pages containing 2092 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_depend_depender_index\" now contains 5267 row versions\nin 30 pages\nSZCZEGÓŁY: 4 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_depend_reference_index\" now contains 5267 row\nversions in 32 pages\nSZCZEGÓŁY: 4 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_depend\": moved 0 row versions, truncated 39 to 39 pages\nSZCZEGÓŁY: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"pg_catalog.pg_depend\"\nINFO: \"pg_depend\": scanned 39 of 39 pages, containing 5267 live rows\nand 0 dead rows; 3000 rows in sample, 5267 estimated total rows\nINFO: free space map contains 3876 pages in 4000 relations\nSZCZEGÓŁY: A total of 67824 page slots are in use (including overhead).\n67824 page slots are required to track all free space.\nCurrent limits are: 204800 page slots, 4000 relations, using 1612 kB.\nNOTICE: max_fsm_relations(4000) equals the number of relations checked\nPODPOWIEDŹ: You have at least 4000 relations. 
Consider increasing\nthe configuration parameter \"max_fsm_relations\".\nVACUUM\n#v-\n\nChange:\n\nmax_fsm_pages = 6400000\nmax_fsm_relations = 400000\n\n-- \nŁukasz Jagiełło\nSystem Administrator\nG-Forces Web Management Polska sp. z o.o. (www.gforces.pl)\n\nUl. Kruczkowskiego 12, 80-288 Gdańsk\nSpółka wpisana do KRS pod nr 246596 decyzją Sądu Rejonowego Gdańsk-Północ\n", "msg_date": "Mon, 25 May 2009 20:31:38 +0200", "msg_from": "=?UTF-8?B?xYF1a2FzeiBKYWdpZcWCxYJv?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "W dniu 25 maja 2009 17:50 użytkownik Grzegorz Jaśkiewicz\n<[email protected]> napisał:\n>> So, in 2000 databases, there's only an average of 2 relations per db\n>> and 102 dead rows?  Cause that's all you got room for with those\n>> settings.\n>>\n>> Whats the last 20 or so lines of vacuum verbose as run by a superuser say?\n>\n> according to http://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html\n> max_fsm_relations applies only to tables and indices, and it says \"in\n> database\", so I presume that means per database. In which case, those\n> settings are ok.\n> It would be nice, to see if vacuum actually complains about it.\n\nVacuum did complain about that but still got:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 609 postgres 15 0 537m 105m 676 S 21.5 1.4 1:15.00 postgres:\nautovacuum launcher process\n 610 postgres 15 0 241m 88m 500 S 20.5 1.1 1:50.87 postgres:\nstats collector process\n\nThat autovacuum working hole time, shoudn't be run only when db needs ?\n\n-- \nŁukasz Jagiełło\nSystem Administrator\nG-Forces Web Management Polska sp. z o.o. (www.gforces.pl)\n\nUl. Kruczkowskiego 12, 80-288 Gdańsk\nSpółka wpisana do KRS pod nr 246596 decyzją Sądu Rejonowego Gdańsk-Północ\n", "msg_date": "Mon, 25 May 2009 20:43:27 +0200", "msg_from": "=?UTF-8?B?xYF1a2FzeiBKYWdpZcWCxYJv?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "=?UTF-8?B?xYF1a2FzeiBKYWdpZcWCxYJv?= <[email protected]> writes:\n> That autovacuum working hole time, shoudn't be run only when db needs ?\n\nWith 2000 databases to cycle through, autovac is going to be spending\nquite a lot of time just finding out whether it needs to do anything.\nI believe the interpretation of autovacuum_naptime is that it should\nexamine each database that often, ie once a minute by default. So\nit's got more than 30 databases per second to look through.\n\nMaybe it would make more sense to have one database (or at least,\nmany fewer databases) and 2000 schemas within it?\n\nIf you really want to stick with this layout, you're going to have to\nincrease autovacuum_naptime.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 May 2009 14:55:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum " }, { "msg_contents": "2009/5/25 Tom Lane <[email protected]>:\n> With 2000 databases to cycle through, autovac is going to be spending\n> quite a lot of time just finding out whether it needs to do anything.\n> I believe the interpretation of autovacuum_naptime is that it should\n> examine each database that often, ie once a minute by default.  
So\n> it's got more than 30 databases per second to look through.\n>\n> Maybe it would make more sense to have one database (or at least,\n> many fewer databases) and 2000 schemas within it?\n\nIt's rather impossible in my case.\n\n> If you really want to stick with this layout, you're going to have to\n> increase autovacuum_naptime.\n\nThanks for hint.\n\n-- \nŁukasz Jagiełło\nSystem Administrator\nG-Forces Web Management Polska sp. z o.o. (www.gforces.pl)\n\nUl. Kruczkowskiego 12, 80-288 Gdańsk\nSpółka wpisana do KRS pod nr 246596 decyzją Sądu Rejonowego Gdańsk-Północ\n", "msg_date": "Mon, 25 May 2009 21:07:53 +0200", "msg_from": "=?UTF-8?B?xYF1a2FzeiBKYWdpZcWCxYJv?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "2009/5/25 Łukasz Jagiełło <[email protected]>:\n> W dniu 25 maja 2009 17:32 użytkownik Scott Marlowe\n> <[email protected]> napisał:\n>>> Recent change postgresql server from Amazon EC2 small into large one.\n>>> That gives me x86_64 arch, two core cpu and 7.5GB ram. Atm got almost\n>>> ~2000 small databases at that server and autovacuum working hole time\n>>\n>>> postgresql.conf:\n>>> max_fsm_pages = 204800\n>>> max_fsm_relations = 4000\n>>\n>> So, in 2000 databases, there's only an average of 2 relations per db\n>> and 102 dead rows?  Cause that's all you got room for with those\n>> settings.\n>>\n>> Whats the last 20 or so lines of vacuum verbose as run by a superuser say?\n>\n> Guess you was right\n>\n\nFor future reference, if you don't log postgresql's messages, please\nturn at least basic logging on, and things like that you would find in\n$PGDATA/pg_log/ logs. The value suggested by postgresql, is the\nminimum. I usually put in 1.5 the suggestion, which covers me from\nworse case hopefully.\n\n-- \nGJ\n", "msg_date": "Mon, 25 May 2009 22:46:11 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Tom Lane escribi�:\n> =?UTF-8?B?xYF1a2FzeiBKYWdpZcWCxYJv?= <[email protected]> writes:\n> > That autovacuum working hole time, shoudn't be run only when db needs ?\n> \n> With 2000 databases to cycle through, autovac is going to be spending\n> quite a lot of time just finding out whether it needs to do anything.\n> I believe the interpretation of autovacuum_naptime is that it should\n> examine each database that often, ie once a minute by default. So\n> it's got more than 30 databases per second to look through.\n\nNote that this is correct in 8.1 and 8.2 but not 8.3 onwards.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 26 May 2009 14:19:34 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane escribi�:\n>> I believe the interpretation of autovacuum_naptime is that it should\n>> examine each database that often, ie once a minute by default. So\n>> it's got more than 30 databases per second to look through.\n\n> Note that this is correct in 8.1 and 8.2 but not 8.3 onwards.\n\nOh? The current documentation still defines the variable thusly:\n\n\tSpecifies the minimum delay between autovacuum runs on any given\n\tdatabase. 
In each round the daemon examines the database and\n\tissues VACUUM and ANALYZE commands as needed for tables in that\n\tdatabase.\n\nI suppose the use of \"minimum\" means that this is not technically\nincorrect, but it's sure not very helpful if there is some other\nrule involved that causes it to not behave as I said. (And if there\nis some other rule, what is that?) Please improve the docs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 May 2009 14:28:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum " }, { "msg_contents": "W dniu 26 maja 2009 20:28 użytkownik Tom Lane <[email protected]> napisał:\n>>> I believe the interpretation of autovacuum_naptime is that it should\n>>> examine each database that often, ie once a minute by default.  So\n>>> it's got more than 30 databases per second to look through.\n>\n>> Note that this is correct in 8.1 and 8.2 but not 8.3 onwards.\n>\n> Oh?  The current documentation still defines the variable thusly:\n>\n>        Specifies the minimum delay between autovacuum runs on any given\n>        database. In each round the daemon examines the database and\n>        issues VACUUM and ANALYZE commands as needed for tables in that\n>        database.\n>\n> I suppose the use of \"minimum\" means that this is not technically\n> incorrect, but it's sure not very helpful if there is some other\n> rule involved that causes it to not behave as I said.  (And if there\n> is some other rule, what is that?)  Please improve the docs.\n\nAfter change autovacuum_naptime postgresql behave like you wrote before.\n\n-- \nŁukasz Jagiełło\nSystem Administrator\nG-Forces Web Management Polska sp. z o.o. (www.gforces.pl)\n\nUl. Kruczkowskiego 12, 80-288 Gdańsk\nSpółka wpisana do KRS pod nr 246596 decyzją Sądu Rejonowego Gdańsk-Północ\n", "msg_date": "Tue, 26 May 2009 20:36:34 +0200", "msg_from": "=?UTF-8?B?xYF1a2FzeiBKYWdpZcWCxYJv?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Tom Lane escribió:\n> Alvaro Herrera <[email protected]> writes:\n> > Tom Lane escribi�:\n> >> I believe the interpretation of autovacuum_naptime is that it should\n> >> examine each database that often, ie once a minute by default. So\n> >> it's got more than 30 databases per second to look through.\n> \n> > Note that this is correct in 8.1 and 8.2 but not 8.3 onwards.\n> \n> Oh? The current documentation still defines the variable thusly:\n> \n> \tSpecifies the minimum delay between autovacuum runs on any given\n> \tdatabase. In each round the daemon examines the database and\n> \tissues VACUUM and ANALYZE commands as needed for tables in that\n> \tdatabase.\n\nSorry, it's the other way around actually -- correct for 8.3 onwards,\nwrong for 8.1 and 8.2. In the earlier versions, it would do one run in\na chosen database, sleep during \"naptime\", then do another run.\n\n> I suppose the use of \"minimum\" means that this is not technically\n> incorrect, but it's sure not very helpful if there is some other\n> rule involved that causes it to not behave as I said. 
(And if there\n> is some other rule, what is that?)\n\nThe word \"minimum\" is there because it's possible that all workers are\nbusy with some other database(s).\n\n> Please improve the docs.\n\nI'll see about that.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 26 May 2009 14:41:15 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Sorry, it's the other way around actually -- correct for 8.3 onwards,\n> wrong for 8.1 and 8.2. In the earlier versions, it would do one run in\n> a chosen database, sleep during \"naptime\", then do another run.\n\n> Tom Lane escribió:\n>> I suppose the use of \"minimum\" means that this is not technically\n>> incorrect, but it's sure not very helpful if there is some other\n>> rule involved that causes it to not behave as I said. (And if there\n>> is some other rule, what is that?)\n\n> The word \"minimum\" is there because it's possible that all workers are\n> busy with some other database(s).\n\n>> Please improve the docs.\n\n> I'll see about that.\n\nHmm, maybe we need to improve the code too. This example suggests that\nthere needs to be some limit on the worker launch rate, even if there\nare so many databases that that means we don't meet naptime exactly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 May 2009 15:01:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum " }, { "msg_contents": "Tom Lane escribi�:\n\n> Hmm, maybe we need to improve the code too. This example suggests that\n> there needs to be some limit on the worker launch rate, even if there\n> are so many databases that that means we don't meet naptime exactly.\n\nWe already have a 100ms lower bound on the sleep time (see\nlauncher_determine_sleep()). Maybe that needs to be increased?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 26 May 2009 19:12:42 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane escribi�:\n>> Hmm, maybe we need to improve the code too. This example suggests that\n>> there needs to be some limit on the worker launch rate, even if there\n>> are so many databases that that means we don't meet naptime exactly.\n\n> We already have a 100ms lower bound on the sleep time (see\n> launcher_determine_sleep()). Maybe that needs to be increased?\n\nMaybe. I hesitate to suggest a GUC variable ;-)\n\nOne thought is that I don't trust the code implementing the minimum\ntoo much:\n\n\t/* 100ms is the smallest time we'll allow the launcher to sleep */\n\tif (nap->tv_sec <= 0 && nap->tv_usec <= 100000)\n\t{\n\t\tnap->tv_sec = 0;\n\t\tnap->tv_usec = 100000;\t/* 100 ms */\n\t}\n\nWhat would happen if tv_sec is negative and tv_usec is say 500000?\nMaybe negative tv_sec is impossible here, but ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 May 2009 19:27:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum " }, { "msg_contents": "Tom Lane escribió:\n> Alvaro Herrera <[email protected]> writes:\n> > Tom Lane escribi�:\n> >> Hmm, maybe we need to improve the code too. 
This example suggests that\n> >> there needs to be some limit on the worker launch rate, even if there\n> >> are so many databases that that means we don't meet naptime exactly.\n> \n> > We already have a 100ms lower bound on the sleep time (see\n> > launcher_determine_sleep()). Maybe that needs to be increased?\n> \n> Maybe. I hesitate to suggest a GUC variable ;-)\n\nHeh :-)\n\n> One thought is that I don't trust the code implementing the minimum\n> too much:\n> \n> \t/* 100ms is the smallest time we'll allow the launcher to sleep */\n> \tif (nap->tv_sec <= 0 && nap->tv_usec <= 100000)\n> \t{\n> \t\tnap->tv_sec = 0;\n> \t\tnap->tv_usec = 100000;\t/* 100 ms */\n> \t}\n> \n> What would happen if tv_sec is negative and tv_usec is say 500000?\n> Maybe negative tv_sec is impossible here, but ...\n\nI don't think it's possible to get negative tv_sec here currently, but\nperhaps you're right that we could make this code more future-proof.\n\nHowever I think there's a bigger problem here, which is that if the user\nhas set naptime too low, i.e. to a value lower than\nnumber-of-databases * 100ms, we'll be running the (expensive)\nrebuild_database_list function on each iteration ... maybe we oughta put\na lower bound on naptime based on the number of databases to avoid this\nproblem.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 26 May 2009 19:51:54 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> However I think there's a bigger problem here, which is that if the user\n> has set naptime too low, i.e. to a value lower than\n> number-of-databases * 100ms, we'll be running the (expensive)\n> rebuild_database_list function on each iteration ... maybe we oughta put\n> a lower bound on naptime based on the number of databases to avoid this\n> problem.\n\nBingo, that's surely exactly what was happening to the OP. He had 2000\ndatabases and naptime at (I assume) the default; so he was rerunning\nrebuild_database_list every 100ms.\n\nSo that recovery code path needs some more thought. Maybe a lower bound\non how often to do rebuild_database_list? And/or don't set adl_next_worker\nto less than 100ms in the future to begin with?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 May 2009 19:59:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum " }, { "msg_contents": "2009/5/26 Tom Lane <[email protected]>:\n> Alvaro Herrera <[email protected]> writes:\n>> However I think there's a bigger problem here, which is that if the user\n>> has set naptime too low, i.e. to a value lower than\n>> number-of-databases * 100ms, we'll be running the (expensive)\n>> rebuild_database_list function on each iteration ... maybe we oughta put\n>> a lower bound on naptime based on the number of databases to avoid this\n>> problem.\n>\n> Bingo, that's surely exactly what was happening to the OP.  He had 2000\n> databases and naptime at (I assume) the default; so he was rerunning\n> rebuild_database_list every 100ms.\n>\n> So that recovery code path needs some more thought.  Maybe a lower bound\n> on how often to do rebuild_database_list?  
And/or don't set adl_next_worker\n> to less than 100ms in the future to begin with?\n\nI'd be happy with logging telling me when things are getting pathological.\n", "msg_date": "Tue, 26 May 2009 19:57:00 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Tom Lane escribi�:\n\n> Bingo, that's surely exactly what was happening to the OP. He had 2000\n> databases and naptime at (I assume) the default; so he was rerunning\n> rebuild_database_list every 100ms.\n> \n> So that recovery code path needs some more thought. Maybe a lower bound\n> on how often to do rebuild_database_list? And/or don't set adl_next_worker\n> to less than 100ms in the future to begin with?\n\nI've been giving this some thought and tried several approaches. In the\nend the one that I like the most is raising autovacuum_naptime to a\nreasonable value for the exiting number of databases. The only problem\nI have with it is that it's trivial to change it in the autovacuum\nlauncher process and have it stick, but there's no way to propagate the\nvalue out to backends or postmaster to that they SHOW the actual value\nin use by the launcher. The best I can do is emit a WARNING with the\nnew value.\n\nI have experimented with other choices such as not rebuilding the\ndatabase list if the time elapsed since last rebuild is not very long,\nbut there were small problems with that so I'd prefer to avoid it.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.", "msg_date": "Mon, 8 Jun 2009 15:36:42 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I've been giving this some thought and tried several approaches. In the\n> end the one that I like the most is raising autovacuum_naptime to a\n> reasonable value for the exiting number of databases. The only problem\n> I have with it is that it's trivial to change it in the autovacuum\n> launcher process and have it stick, but there's no way to propagate the\n> value out to backends or postmaster to that they SHOW the actual value\n> in use by the launcher. The best I can do is emit a WARNING with the\n> new value.\n\nWell, that code isn't even correct I think; you're not supposed to\nmodify a GUC variable directly. I think you should just silently\nuse a naptime of at least X without changing the nominal GUC variable.\nAnd definitely without the WARNING --- that's nothing but log spam.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Jun 2009 16:16:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum " }, { "msg_contents": "Tom Lane escribi�:\n\n> Well, that code isn't even correct I think; you're not supposed to\n> modify a GUC variable directly. 
I think you should just silently\n> use a naptime of at least X without changing the nominal GUC variable.\n> And definitely without the WARNING --- that's nothing but log spam.\n\nGlitches fixed in this version; will apply shortly to 8.3 and HEAD.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support", "msg_date": "Tue, 9 Jun 2009 11:48:27 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Alvaro Herrera escribi�:\n> Tom Lane escribi�:\n> \n> > Well, that code isn't even correct I think; you're not supposed to\n> > modify a GUC variable directly. I think you should just silently\n> > use a naptime of at least X without changing the nominal GUC variable.\n> > And definitely without the WARNING --- that's nothing but log spam.\n> \n> Glitches fixed in this version; will apply shortly to 8.3 and HEAD.\n\nCommitted.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 9 Jun 2009 12:42:34 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Glitches fixed in this version; will apply shortly to 8.3 and HEAD.\n\nLooks sane; one trivial grammar correction:\n\n> + /* the minimum allowed time between two awakening of the launcher */\n\nShould read \"two awakenings\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Jun 2009 12:42:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with autovacuum " } ]
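A short, hedged illustration of the monitoring discussed in the thread above: on 8.3 the statistics collector already tracks what autovacuum would act on, so the per-database pressure can be checked with plain SQL instead of guessing. Only standard catalog views are used here; run it in each database of interest, and note that track_counts must be on for the dead-tuple figures to be collected.

-- tables with the most dead rows, and when autovacuum last visited them
SELECT schemaname, relname, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- the launcher has to visit every database, so with ~2000 databases the
-- default 1min naptime works out to more than 30 databases per second;
-- a naptime of several minutes keeps the per-database interval sane,
-- as suggested earlier in the thread
SHOW autovacuum_naptime;

This is only a sketch of how to confirm whether the launcher is doing useful work; the fix eventually committed (a minimum effective naptime tied to the number of databases) does not change how these views are read.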
[ { "msg_contents": "Hi,\n\nI have sen many posts on using SSDs, and iodrive\n<http://www.fusionio.com>in particular, to accelerate the performance\nof Postgresql (or other DBMS)\n-- e.g. this discussion<http://groups.google.co.il/group/pgsql.performance/browse_thread/thread/1d6d7434246afd97?pli=1>.\nI have also seen the suggestion to use RAM for the same purpose by creating\na tablespace on a RAM mount\npoint.<http://magazine.redhat.com/2007/12/12/tip-from-an-rhce-memory-storage-on-postgresql/>Granted\nthese make most sense when the whole database cannot fit into main\nmemory, or if we want to avoid cold DB response times (i.e waiting for the\nDB to \"warm up\" as stuff gets cached in memory).\n\nMy question is this: if we use either SSD or RAM tablespaces, I would\nimagine postgresql will be oblevient to this and would still cache the\ntablespace elemenst that are on SSD or RAM into memory - right? Is there a\nway to avoid that, i.e. to tell postgress NOT to cache tablespaces, or some\nother granularity of the DB?\n\nThanks,\n\n-- Shaul\n\n*Dr. Shaul Dar*\nEmail: [email protected]\nWeb: www.shauldar.com\n\nHi,I have sen many posts on using SSDs, and iodrive in particular, to accelerate the performance of Postgresql (or other DBMS) -- e.g. this discussion. I have also seen the suggestion to use RAM for the same purpose by creating a tablespace on a RAM mount point. Granted these make most sense when the whole database cannot fit into main memory, or if we want to avoid cold DB response times (i.e waiting for the DB to \"warm up\" as stuff gets cached in memory).\nMy question is this: if we use either SSD or RAM tablespaces, I would imagine postgresql will be oblevient to this and would still cache the tablespace elemenst that are on SSD or RAM into memory - right? Is there a way to avoid that, i.e. to tell postgress NOT to cache tablespaces, or some other granularity of the DB?\nThanks,-- ShaulDr. Shaul Dar\nEmail: [email protected]: www.shauldar.com", "msg_date": "Mon, 25 May 2009 16:51:59 +0300", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Putting tables or indexes in SSD or RAM: avoiding double caching?" }, { "msg_contents": "Well for one thing on the IODrive. Be sure to use a FS that supports direct IO so you don't cache it on the FS level and thus take room an object not on SSD could use. We use vxfs with mincache=direct as our filesystem for just this reason. Also, there is an IO drive tuning manual that discusses the same. It's a good read if you don't already have it. I do not know of a way to partition the PG cache other than make it small and use the FS controls to force direct IO.\n\n2.5 cents..\n\n-kg\n\n\n-----Original Message-----\nFrom: [email protected] on behalf of Shaul Dar\nSent: Mon 5/25/2009 6:51 AM\nTo: [email protected]\nSubject: [PERFORM] Putting tables or indexes in SSD or RAM: avoiding double caching?\n \nHi,\n\nI have sen many posts on using SSDs, and iodrive\n<http://www.fusionio.com>in particular, to accelerate the performance\nof Postgresql (or other DBMS)\n-- e.g. 
this discussion<http://groups.google.co.il/group/pgsql.performance/browse_thread/thread/1d6d7434246afd97?pli=1>.\nI have also seen the suggestion to use RAM for the same purpose by creating\na tablespace on a RAM mount\npoint.<http://magazine.redhat.com/2007/12/12/tip-from-an-rhce-memory-storage-on-postgresql/>Granted\nthese make most sense when the whole database cannot fit into main\nmemory, or if we want to avoid cold DB response times (i.e waiting for the\nDB to \"warm up\" as stuff gets cached in memory).\n\nMy question is this: if we use either SSD or RAM tablespaces, I would\nimagine postgresql will be oblevient to this and would still cache the\ntablespace elemenst that are on SSD or RAM into memory - right? Is there a\nway to avoid that, i.e. to tell postgress NOT to cache tablespaces, or some\nother granularity of the DB?\n\nThanks,\n\n-- Shaul\n\n*Dr. Shaul Dar*\nEmail: [email protected]\nWeb: www.shauldar.com\n\n\n\n\n\n\nRE: [PERFORM] Putting tables or indexes in SSD or RAM: avoiding double caching?\n\n\n\nWell for one thing on the IODrive.  Be sure to use a FS that supports direct IO so you don't cache it on the FS level and thus take room an object not on SSD could use.  We use vxfs with mincache=direct as our filesystem for just this reason.  Also, there is an IO drive tuning manual that discusses the same.  It's a good read if you don't already have it.  I do not know of a way to partition the PG cache other than make it small and use the FS controls to force direct IO.\n\n2.5 cents..\n\n-kg\n\n\n-----Original Message-----\nFrom: [email protected] on behalf of Shaul Dar\nSent: Mon 5/25/2009 6:51 AM\nTo: [email protected]\nSubject: [PERFORM] Putting tables or indexes in SSD or RAM: avoiding double caching?\n\nHi,\n\nI have sen many posts on using SSDs, and iodrive\n<http://www.fusionio.com>in particular, to accelerate the performance\nof Postgresql (or other DBMS)\n-- e.g. this discussion<http://groups.google.co.il/group/pgsql.performance/browse_thread/thread/1d6d7434246afd97?pli=1>.\nI have also seen the suggestion to use RAM for the same purpose by creating\na tablespace on a RAM mount\npoint.<http://magazine.redhat.com/2007/12/12/tip-from-an-rhce-memory-storage-on-postgresql/>Granted\nthese make most sense when the whole database cannot fit into main\nmemory, or if we want to avoid cold DB response times (i.e waiting for the\nDB to \"warm up\" as stuff gets cached in memory).\n\nMy question is this: if we use either SSD or RAM tablespaces, I would\nimagine postgresql will be oblevient to this and would still cache the\ntablespace elemenst that are on SSD or RAM into memory - right? Is there a\nway to avoid that, i.e. to tell postgress NOT to cache tablespaces, or some\nother granularity of the DB?\n\nThanks,\n\n-- Shaul\n\n*Dr. Shaul Dar*\nEmail: [email protected]\nWeb: www.shauldar.com", "msg_date": "Tue, 26 May 2009 11:49:25 -0400", "msg_from": "\"Kenny Gorman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Putting tables or indexes in SSD or RAM: avoiding double caching?" } ]
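As a point of reference for the tablespace approach discussed above, the placement itself is ordinary SQL; the mount point and the table and index names below are made up for illustration, and whether the OS page cache gets bypassed is a filesystem mount option (direct I/O), not something the database controls:

-- create a tablespace on the faster device (path is an assumption)
CREATE TABLESPACE fast_ssd LOCATION '/mnt/iodrive/pgdata';

-- new objects can be created there, and existing ones moved
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE fast_ssd;
ALTER TABLE orders SET TABLESPACE fast_ssd;

Note that shared_buffers remains a single pool, so pages from objects in this tablespace are still cached inside PostgreSQL; as the reply above points out, avoiding the second copy in the OS cache has to be done at the filesystem level, since there is no per-tablespace caching knob.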
[ { "msg_contents": "I keep falling into situations where it would be nice to host a server \nsomewhere else. Virtual host solutions and the mysterious cloud are no \ngood for the ones I run into though, as disk performance is important for \nall the applications I have to deal with.\n\nWhat I'd love to have is a way to rent a fairly serious piece of dedicated \nhardware, ideally with multiple (at least 4) hard drives in a RAID \nconfiguration and a battery-backed write cache. The cache is negotiable. \nLinux would be preferred, FreeBSD or Solaris would also work; not Windows \nthough (see \"good DB performance\").\n\nIs anyone aware of a company that offers such a thing?\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 26 May 2009 17:51:25 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Hosted servers with good DB disk performance?" }, { "msg_contents": "Depends on the level of facility you are looking for. Peer1 (www.peer1.com)\nwill sell you just about whatever you need contained in a single box and I\nbelieve their Atlanta facility and some others have a managed SAN option.\nSince you want a customized solution, make sure you talk with one of their\nsolutions engineers. Another good option in this range up to mid-enterprise\nhosting solutions is Host My Site (www.hostmysite.com). On the very high\nend of the spectrum, gni (www.gni.com) seems to provide a good set of\ninfrastructure as a service (IAAS) solutions including SAN storage and very\nhigh bandwidth - historically they have been very successful in the MPOG\nworld. If you are interested, I can put you in touch with real people who\ncan help you at all three organizations.\n\nJerry Champlin|Absolute Performance Inc.\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Greg Smith\nSent: Tuesday, May 26, 2009 3:51 PM\nTo: [email protected]\nSubject: [PERFORM] Hosted servers with good DB disk performance?\n\nI keep falling into situations where it would be nice to host a server \nsomewhere else. Virtual host solutions and the mysterious cloud are no \ngood for the ones I run into though, as disk performance is important for \nall the applications I have to deal with.\n\nWhat I'd love to have is a way to rent a fairly serious piece of dedicated \nhardware, ideally with multiple (at least 4) hard drives in a RAID \nconfiguration and a battery-backed write cache. The cache is negotiable. \nLinux would be preferred, FreeBSD or Solaris would also work; not Windows \nthough (see \"good DB performance\").\n\nIs anyone aware of a company that offers such a thing?\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n", "msg_date": "Tue, 26 May 2009 16:17:37 -0600", "msg_from": "\"Jerry Champlin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "Greg Smith wrote:\n> What I'd love to have is a way to rent a fairly serious piece of \n> dedicated hardware, ideally with multiple (at least 4) hard drives in a \n> RAID configuration and a battery-backed write cache. The cache is \n> negotiable. 
Linux would be preferred, FreeBSD or Solaris would also \n> work; not Windows though (see \"good DB performance\").\n\nWe tried this with poor results.  Most of the co-location and server-farm places are set up with generic systems that are optimized for small-to-medium-sized web sites.  They use MySQL and are surprised to hear there's an alternative open-source DB.  They claim to be able to create custom configurations, but it's a lie.\n\nThe problem is that they run on thin profit margins, and their techs are mostly ignorant; they just follow scripts.  If something goes wrong, or they make an error, you can't get anything through their thick heads.  And you can't go down there and fix it yourself.\n\nFor example, we told them EXACTLY how to set up our system, but they decided that automatic monthly RPM OS updates couldn't hurt.  So the first of the month, we came in in the morning to find that Linux had been updated to libraries that were incompatible with our own software, the system automatically rebooted and our web site was dead.  And many similar incidents.\n\nWe finally bought some nice Dell servers and found a co-location site that provides us all the infrastructure (reliable power, internet, cooling, security...), and we're in charge of the computers.  We've never looked back.\n\nCraig\n", "msg_date": "Tue, 26 May 2009 15:28:08 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On Tue, May 26, 2009 at 11:51 PM, Greg Smith <[email protected]> wrote:\n> I keep falling into situations where it would be nice to host a server\n> somewhere else.  Virtual host solutions and the mysterious cloud are no good\n> for the ones I run into though, as disk performance is important for all the\n> applications I have to deal with.\n\nPerhaps you'll be satisfied with\nhttp://www.ovh.co.uk/products/dedicated_list.xml ? Personally I have\nonly one machine there (SuperPlan Mini) - I asked them to set up\nProxmox (http://pve.proxmox.com/wiki/Main_Page ) for me and now I have\nfour OpenVZ Linux containers with different setup and services. So far\nI can't be more happy.\n\nRegards,\nMarcin\n", "msg_date": "Wed, 27 May 2009 00:50:47 +0200", "msg_from": "=?UTF-8?Q?Marcin_St=C4=99pnicki?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On Tue, 2009-05-26 at 17:51 -0400, Greg Smith wrote:\n> I keep falling into situations where it would be nice to host a server \n> somewhere else.  Virtual host solutions and the mysterious cloud are no \n> good for the ones I run into though, as disk performance is important for \n> all the applications I have to deal with.\n> \n> What I'd love to have is a way to rent a fairly serious piece of dedicated \n> hardware, ideally with multiple (at least 4) hard drives in a RAID \n> configuration and a battery-backed write cache.  The cache is negotiable. \n> Linux would be preferred, FreeBSD or Solaris would also work; not Windows \n> though (see \"good DB performance\").\n> \n> Is anyone aware of a company that offers such a thing?\n\nSure, CMD will do it, so will Rack Space and a host of others. If you\nare willing to go with a VPS, SliceHost are decent folk. CMD doesn't rent\nhardware, you would have to provide that; Rack Space does.\n\nJoshua D. 
Drake\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Tue, 26 May 2009 16:00:34 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On 5/26/09, Greg Smith <[email protected]> wrote:\n> I keep falling into situations where it would be nice to host a server\n> somewhere else. Virtual host solutions and the mysterious cloud are no\n> good for the ones I run into though, as disk performance is important for\n> all the applications I have to deal with.\n>\n> What I'd love to have is a way to rent a fairly serious piece of dedicated\n> hardware, ideally with multiple (at least 4) hard drives in a RAID\n> configuration and a battery-backed write cache. The cache is negotiable.\n> Linux would be preferred, FreeBSD or Solaris would also work; not Windows\n> though (see \"good DB performance\").\n>\n> Is anyone aware of a company that offers such a thing?\n\nwww.contegix.com offer just about the best support I've come across\nand are familiar with Postgres. They offer RHEL (and windows) managed\nservers on a variety of boxes. They're not a budget outfit though, but\nthat's reflected in the service.\n\n-- \nDave Page\nEnterpriseDB UK: http://www.enterprisedb.com\n", "msg_date": "Tue, 26 May 2009 19:58:26 -0400", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On Tue, 26 May 2009, Joshua D. Drake wrote:\n\n> CMD doesn't rent hardware you would have to provide that, Rack Space \n> does.\n\nPart of the idea was to avoid buying a stack of servers, if this were just \na \"where do I put the boxes at?\" problem I'd have just asked you about it \nalready. I forgot to check Rack Space earlier, looks like they have Dell \nservers with up to 8 drives and a RAID controller in them available. \nLet's just hope it's not one of the completely useless PERC models there; \ncan anyone confirm Dell's PowerEdge R900 has one of the decent performing \nPERC6 controllers I've heard rumors of in it?\n\nCraig, I share your concerns about outsourced hosting, but as the only \ncustom application involved is one I build my own RPMs for I'm not really \nconcerned about the system getting screwed up software-wise. The idea \nhere is that I might rent an eval system to confirm performance is \nreasonable, and if it is then I'd be clear to get a bigger stack of them. \nLuckily there's a guy here who knows a bit about benchmarking for this \nsort of thing...\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 26 May 2009 21:17:05 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "Greg,\n\n> I keep falling into situations where it would be nice to host a server\n> somewhere else. Virtual host solutions and the mysterious cloud are no\n> good for the ones I run into though, as disk performance is important\n> for all the applications I have to deal with.\n\nJoyent will guarentee you a certain amount of disk bandwidth. 
As far as \nI know, they're the only hoster who does.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Tue, 26 May 2009 18:26:08 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "\nOn 5/26/09 6:17 PM, \"Greg Smith\" <[email protected]> wrote:\n\n> On Tue, 26 May 2009, Joshua D. Drake wrote:\n> \n>> CMD doesn't rent hardware you would have to provide that, Rack Space\n>> does.\n> \n> Part of the idea was to avoid buying a stack of servers, if this were just\n> a \"where do I put the boxes at?\" problem I'd have just asked you about it\n> already. I forgot to check Rack Space earlier, looks like they have Dell\n> servers with up to 8 drives and a RAID controller in them available.\n> Let's just hope it's not one of the completely useless PERC models there;\n> can anyone confirm Dell's PowerEdge R900 has one of the decent performing\n> PERC6 controllers I've heard rumors of in it?\n\nEvery managed hosting provider I've seen uses RAID controllers and support\nthrough the hardware provider. If its Dell its 99% likely a PERC (OEM'd\nLSI).\nHP, theirs (not sure who the OEM is), Sun theirs (OEM'd Adaptec).\n\nPERC6 in my testing was certainly better than PERC5, but its still sub-par\nin sequential transfer rate or scaling up past 6 or so drives in a volume.\n\nI did go through the process of using a managed hosting provider and getting\ncustom RAID card and storage arrays -- but that takes a lot of hand-holding\nand time, and will most certainly cause setup delays and service issues when\nthings go wrong and you've got the black-sheep server. Unless its\nabsolutely business critical to get that last 10%-20% performance, I would\ngo with whatever they have with no customization.\n\nMost likely if you ask for a database setup, they'll give you 6 or 8 drives\nin raid-5. Most of what these guys do is set up LAMP cookie-cutters...\n\n> \n> Craig, I share your concerns about outsourced hosting, but as the only\n> custom application involved is one I build my own RPMs for I'm not really\n> concerned about the system getting screwed up software-wise. The idea\n> here is that I might rent an eval system to confirm performance is\n> reasonable, and if it is then I'd be clear to get a bigger stack of them.\n> Luckily there's a guy here who knows a bit about benchmarking for this\n> sort of thing...\n> \n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 26 May 2009 18:41:21 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On Tue, May 26, 2009 at 7:41 PM, Scott Carey <[email protected]> wrote:\n>\n> On 5/26/09 6:17 PM, \"Greg Smith\" <[email protected]> wrote:\n>\n>> On Tue, 26 May 2009, Joshua D. Drake wrote:\n>>\n>>> CMD doesn't rent hardware you would have to provide that, Rack Space\n>>> does.\n>>\n>> Part of the idea was to avoid buying a stack of servers, if this were just\n>> a \"where do I put the boxes at?\" problem I'd have just asked you about it\n>> already.  
I forgot to check Rack Space earlier, looks like they have Dell\n>> servers with up to 8 drives and a RAID controller in them available.\n>> Let's just hope it's not one of the completely useless PERC models there;\n>> can anyone confirm Dell's PowerEdge R900 has one of the decent performing\n>> PERC6 controllers I've heard rumors of in it?\n>\n> Every managed hosting provider I've seen uses RAID controllers and support\n> through the hardware provider.  If its Dell its 99% likely a PERC (OEM'd\n> LSI).\n> HP, theirs (not sure who the OEM is), Sun theirs (OEM'd Adaptec).\n>\n> PERC6 in my testing was certainly better than PERC5, but its still sub-par\n> in sequential transfer rate or scaling up past 6 or so drives in a volume.\n>\n> I did go through the process of using a managed hosting provider and getting\n> custom RAID card and storage arrays -- but that takes a lot of hand-holding\n> and time, and will most certainly cause setup delays and service issues when\n> things go wrong and you've got the black-sheep server.  Unless its\n> absolutely business critical to get that last 10%-20% performance, I would\n> go with whatever they have with no customization.\n>\n> Most likely if you ask for a database setup, they'll give you 6 or 8 drives\n> in raid-5.  Most of what these guys do is set up LAMP cookie-cutters...\n>\n>>\n>> Craig, I share your concerns about outsourced hosting, but as the only\n>> custom application involved is one I build my own RPMs for I'm not really\n>> concerned about the system getting screwed up software-wise.  The idea\n>> here is that I might rent an eval system to confirm performance is\n>> reasonable, and if it is then I'd be clear to get a bigger stack of them.\n>> Luckily there's a guy here who knows a bit about benchmarking for this\n>> sort of thing...\n\nYeah, the OP would be much better served ordering a server with an\nAreca or Escalade / 3ware controller setup and ready to go, shipped to\nthe hosting center and sshing in and doing the rest than letting a\nhosted solution company try to compete. You can get a nice 16x15K SAS\ndisk machine with an Areca controller, dual QC cpus, and 16 to 32 gig\nram for $6000 to $8000 ready to go. We've since repurposed our Dell /\nPERC machines as file servers and left the real database server work\nto our aberdeen machines. Trying to wring reasonable performance out\nof most Dell servers is a testament to frustration.\n", "msg_date": "Tue, 26 May 2009 19:52:43 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "\nOn 5/26/09 6:52 PM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> On Tue, May 26, 2009 at 7:41 PM, Scott Carey <[email protected]> wrote:\n>> \n>> On 5/26/09 6:17 PM, \"Greg Smith\" <[email protected]> wrote:\n>> \n>>> On Tue, 26 May 2009, Joshua D. Drake wrote:\n>>> \n>>>> CMD doesn't rent hardware you would have to provide that, Rack Space\n>>>> does.\n>>> \n>>> Part of the idea was to avoid buying a stack of servers, if this were just\n>>> a \"where do I put the boxes at?\" problem I'd have just asked you about it\n>>> already.  
I forgot to check Rack Space earlier, looks like they have Dell\n>>> servers with up to 8 drives and a RAID controller in them available.\n>>> Let's just hope it's not one of the completely useless PERC models there;\n>>> can anyone confirm Dell's PowerEdge R900 has one of the decent performing\n>>> PERC6 controllers I've heard rumors of in it?\n>> \n>> Every managed hosting provider I've seen uses RAID controllers and support\n>> through the hardware provider.  If its Dell its 99% likely a PERC (OEM'd\n>> LSI).\n>> HP, theirs (not sure who the OEM is), Sun theirs (OEM'd Adaptec).\n>> \n>> PERC6 in my testing was certainly better than PERC5, but its still sub-par\n>> in sequential transfer rate or scaling up past 6 or so drives in a volume.\n>> \n>> I did go through the process of using a managed hosting provider and getting\n>> custom RAID card and storage arrays -- but that takes a lot of hand-holding\n>> and time, and will most certainly cause setup delays and service issues when\n>> things go wrong and you've got the black-sheep server.  Unless its\n>> absolutely business critical to get that last 10%-20% performance, I would\n>> go with whatever they have with no customization.\n>> \n>> Most likely if you ask for a database setup, they'll give you 6 or 8 drives\n>> in raid-5.  Most of what these guys do is set up LAMP cookie-cutters...\n>> \n>>> \n>>> Craig, I share your concerns about outsourced hosting, but as the only\n>>> custom application involved is one I build my own RPMs for I'm not really\n>>> concerned about the system getting screwed up software-wise.  The idea\n>>> here is that I might rent an eval system to confirm performance is\n>>> reasonable, and if it is then I'd be clear to get a bigger stack of them.\n>>> Luckily there's a guy here who knows a bit about benchmarking for this\n>>> sort of thing...\n> \n> Yeah, the OP would be much better served ordering a server with an\n> Areca or Escalade / 3ware controller setup and ready to go, shipped to\n> the hosting center and sshing in and doing the rest than letting a\n> hosted solution company try to compete. You can get a nice 16x15K SAS\n> disk machine with an Areca controller, dual QC cpus, and 16 to 32 gig\n> ram for $6000 to $8000 ready to go. We've since repurposed our Dell /\n> PERC machines as file servers and left the real database server work\n> to our aberdeen machines. Trying to wring reasonable performance out\n> of most Dell servers is a testament to frustration.\n> \n\nFor a permanent server, yes. But for a sort lease? You have to go with\nwhat is easily available for lease, or work out something with a provider\nwhere they buy the HW from you and manage/lease it back (some do this, but\nall I've ever heard of involved 12+ servers to do so and sign on for 1 or 2\nyears).\n\nExpecting full I/O performance out of a DELL with a PERC is not really\npossible, but maybe that's not as important as a certain pricing model or\nthe flexibility? That is really an independent business decision.\n\nI'll also but a caveat to the '3ware' above -- the last few I've used were\nslower than the PERC (9650 series versus PERC6, 9550 versus PERC5 -- all\ntests with 12 SATA drives raid 10).\nI have no experience with the 3ware 9690 series (SAS) though -- those might\nbe just fine.\n\n", "msg_date": "Tue, 26 May 2009 19:27:18 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" 
}, { "msg_contents": "\n\n\nOn 5/26/09 7:27 PM, \"Scott Carey\" <[email protected]> wrote:\n> \n> For a permanent server, yes. But for a sort lease? You have to go with\n\nAhem ... 'short' not 'sort'. \n\n", "msg_date": "Tue, 26 May 2009 19:31:34 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On Tue, May 26, 2009 at 8:27 PM, Scott Carey <[email protected]> wrote:\n\n>> Yeah, the OP would be much better served ordering a server with an\n>> Areca or Escalade / 3ware controller setup and ready to go, shipped to\n>> the hosting center and sshing in and doing the rest than letting a\n>> hosted solution company try to compete.  You can get a nice 16x15K SAS\n>> disk machine with an Areca controller, dual QC cpus, and 16 to 32 gig\n>> ram for $6000 to $8000 ready to go.  We've since repurposed our Dell /\n>> PERC machines as file servers and left the real database server work\n>> to our aberdeen machines.  Trying to wring reasonable performance out\n>> of most Dell servers is a testament to frustration.\n>>\n>\n> For a permanent server, yes.  But for a sort lease?  You have to go with\n> what is easily available for lease, or work out something with a provider\n> where they buy the HW from you and manage/lease it back (some do this, but\n> all I've ever heard of involved 12+ servers to do so and sign on for 1 or 2\n> years).\n\nTrue, but given the low cost of a high drive count machine with spares\netc you can come away spending a lot less than by leasing.\n\n> Expecting full I/O performance out of a DELL with a PERC is not really\n> possible, but maybe that's not as important as a certain pricing model or\n> the flexibility?  That is really an independent business decision.\n\nTrue. Plus if you only need 4 drives or something, you can do pretty\nwell with a Dell with the RAID controller turned to JBOD and letting\nthe linux kernel do the RAID work.\n\n> I'll also but a caveat to the '3ware' above -- the last few I've used were\n> slower than the PERC (9650 series versus PERC6, 9550 versus PERC5  -- all\n> tests with 12 SATA drives raid 10).\n> I have no experience with the 3ware 9690 series (SAS) though -- those might\n> be just fine.\n\nMy experience is primarily with Areca 1100, 1200, and 1600 series\ncontrollers, but others on the list have done well with 3ware\ncontrollers. We have an 8 port 11xx series areca card at work running\nRAID-6 as a multipurpose server, and it's really quite fast and well\nbehaved for sequential throughput. But the 16xx series cards stomp\nthe 11xx series in the ground for random IOPS.\n", "msg_date": "Tue, 26 May 2009 20:35:42 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On Tue, 26 May 2009, Scott Marlowe wrote:\n\n> Plus if you only need 4 drives or something, you can do pretty well with \n> a Dell with the RAID controller turned to JBOD and letting the linux \n> kernel do the RAID work.\n\nI think most of the apps I'm considering would be OK with 4 drives and a \nuseful write cache. The usual hosted configurations are only 1 or 2 and \nno usable cache, which really limits what you can do with the server \nbefore you run into a disk bottleneck. 
My rule of thumb is that any \nsingle core will be satisfied as long as you've got at least 4 disks to \nfeed it, since it's hard for one process to use more than a couple of \nhundred MB/s for doing mostly sequential work. Obviously random access is \nmuch easier to get disk-bound, where you have to throw a lot more disks at \nit.\n\nIt wouldn't surprise me to find it's impossible to get an optimal setup of \n8+ disks from any hosting provider. Wasn't asking for \"great\" DB \nperformance though, just \"good\".\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 26 May 2009 23:23:56 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "Greg Smith wrote:\n> I keep falling into situations where it would be nice to host a server\n> somewhere else. Virtual host solutions and the mysterious cloud are no\n> good for the ones I run into though, as disk performance is important\n> for all the applications I have to deal with.\n\nIt's worth noting that some clouds are foggier than others.\n\nOn Amazon's you can improve your disk performance by setting up\nsoftware RAID over multiple of their virtual drives. And since they\ncharge by GB, it doesn't cost you any more to do this than to set up\na smaller number of larger drives.\n\nHere's a blog showing Bonnie++ comparing various RAID levels\non Amazon's cloud - with a 4 disk RAID0 giving a nice\nperformance increase over a single virtual drive.\nhttp://af-design.com/blog/2009/02/27/amazon-ec2-disk-performance/\n\nHere's a guy who set up a 40TB RAID0 with 40 1TB virtual disks\non Amazon.\nhttp://groups.google.com/group/ec2ubuntu/web/raid-0-on-ec2-ebs-volumes-elastic-block-store-using-mdadm\nhttp://groups.google.com/group/ec2ubuntu/browse_thread/thread/d520ae145edf746\n\nI might get around to trying some pgbench runs on amazon\nin a week or so. Any suggestions what would be most interesting?\n\n\n> What I'd love to have is a way to rent a fairly serious piece of\n> dedicated hardware, ideally with multiple (at least 4) hard drives in a\n> RAID configuration and a battery-backed write cache. The cache is\n> negotiable. Linux would be preferred, FreeBSD or Solaris would also\n> work; not Windows though (see \"good DB performance\").\n> \n> Is anyone aware of a company that offers such a thing?\n> \n\n", "msg_date": "Tue, 26 May 2009 20:58:41 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On Tue, May 26, 2009 at 6:28 PM, Craig James <[email protected]> wrote:\n> Greg Smith wrote:\n>>\n>> What I'd love to have is a way to rent a fairly serious piece of dedicated\n>> hardware, ideally with multiple (at least 4) hard drives in a RAID\n>> configuration and a battery-backed write cache.  The cache is negotiable.\n>> Linux would be preferred, FreeBSD or Solaris would also work; not Windows\n>> though (see \"good DB performance\").\n\n> We finally bought some nice Dell servers and found a co-location site that\n> provides us all the infrastructure (reliable power, internet, cooling,\n> security...), and we're in charge of the computers.  We've never looked\n> back.\n\nI ran this way on a Quad-processor Dell for many years, and then,\nafter selling the business and starting a new one, decided to keep my\nDB on a remote-hosted machine. 
I have a dual-core2 with hardware RAID\n5 (I know, I know) and a private network interface to the other\nservers (web, email, web-cache)\n\nJust today when the DB server went down (after 2 years of reliable\nservice .... and 380 days of uptime) they gave me remote KVM access to\nthe machine. Turns out I had messed up the fstab while fiddling with\nthe server because I really don't know FreeBSD as well as Linux,\n\nI think remote leased-hosting works fine as long as you have a\ncompetent team on the other end and \"KVM over IP\" access. Many\nproviders don't have that... and without it you can get stuck as you\ndescribe.\n\nI have used MANY providers over they years, at the peak with over 30\nleased servers at 12 providers, and with many colocation situations as\nwell. The only advantage with colocation I have seen .... is the\nreduced expense if you keep it going for a few years on the same\nbox..... which is a big advantage if it lets you buy a much more\npowerful box to begin with.\n\nProviders I prefer for high-end machines allow me to upgrade the\nhardware with no monthly fees (marked-up cost of upgrade + time/labor\nonly).... that keeps the cost down.\n", "msg_date": "Wed, 27 May 2009 00:09:36 -0400", "msg_from": "Erik Aronesty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "Greg Smith wrote:\n> What I'd love to have is a way to rent a fairly serious piece of \n> dedicated hardware, ideally with multiple (at least 4) hard drives in \n> a RAID configuration and a battery-backed write cache. The cache is \n> negotiable. Linux would be preferred, FreeBSD or Solaris would also \n> work; not Windows though (see \"good DB performance\").\n>\n> Is anyone aware of a company that offers such a thing?\nI've used http://softlayer.com/ in the past and highly recommend them. \nThey sell a wide range of dedicated servers, including ones that handle \nup to 12 HDDs/SSDs, and servers with battery-backed RAID controllers \n(I've been told they use mostly Adaptec cards as well as some 3ware \ncards). In addition, all their servers are connected to a private \nnetwork you can VPN into, and all include IPMI for remote management \nwhen you can't SSH into your server. They have a host of other \nfeatures; click on the Services tab on their site to find out more.\n\nAlex\n\n", "msg_date": "Wed, 27 May 2009 00:12:48 -0500", "msg_from": "Alex Adriaanse <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "Hi,\n\nGreg Smith <[email protected]> writes:\n\n> I keep falling into situations where it would be nice to host a server\n> somewhere else. Virtual host solutions and the mysterious cloud are no good\n> for the ones I run into though, as disk performance is important for all the\n> applications I have to deal with.\n\nA french company here is working on several points of interest for you,\nI'd say. 
They provide dedicated server renting and are working on their\nown OpenSource cloud solution, so there's nothing mysterious about it,\nand you can even run the software in your own datacenter(s).\n http://lost-oasis.fr/\n http://www.niftyname.org/\n\nOK, granted, the company's french and their site too, but the OpenSource\ncloud solution is in english and the code available in public git\nrepositories (and tarballs).\n\n> What I'd love to have is a way to rent a fairly serious piece of dedicated\n> hardware, ideally with multiple (at least 4) hard drives in a RAID\n> configuration and a battery-backed write cache. The cache is\n> negotiable. Linux would be preferred, FreeBSD or Solaris would also work;\n> not Windows though (see \"good DB performance\").\n>\n> Is anyone aware of a company that offers such a thing?\n\nDid you omit to say \"english spoken\" as a requirement? :)\n-- \ndim\n", "msg_date": "Wed, 27 May 2009 11:07:27 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On Tue, 2009-05-26 at 19:52 -0600, Scott Marlowe wrote:\n> On Tue, May 26, 2009 at 7:41 PM, Scott Carey <[email protected]> wrote:\n> >\n> > On 5/26/09 6:17 PM, \"Greg Smith\" <[email protected]> wrote:\n> >\n> >> On Tue, 26 May 2009, Joshua D. Drake wrote:\n> >>\n> >>> CMD doesn't rent hardware you would have to provide that, Rack Space\n> >>> does.\n> >>\n> >> Part of the idea was to avoid buying a stack of servers, if this were just\n> >> a \"where do I put the boxes at?\" problem I'd have just asked you about it\n> >> already.\n\nHeh. Well on another consideration any \"rental\" will out live its cost\neffectiveness in 6 months or less. At least if you own the box, its\nuseful for a long period of time.\n\nHeck I got a quad opteron, 2 gig of memory with 2 6402 HP controllers\nand 2 fully loaded MSA30s for 3k. Used of course but still.\n\nThe equivalent machine brand new is 10k and the same machine from Rack\nSpace is going to be well over 1200.00 a month.\n\n\nJoshua D. Drake\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Wed, 27 May 2009 07:25:59 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "\n> Heh. Well on another consideration any \"rental\" will out live its cost\n> effectiveness in 6 months or less. At least if you own the box, its\n> useful for a long period of time.\n>\n> Heck I got a quad opteron, 2 gig of memory with 2 6402 HP controllers\n> and 2 fully loaded MSA30s for 3k. Used of course but still.\n>\n> The equivalent machine brand new is 10k and the same machine from Rack\n> Space is going to be well over 1200.00 a month.\n> \nPresumably true, but owing the gear means: 1) buying the gear; 2) buying \nbackup hardware if you need a \"shell\" or replacement gear to be handy so \nif something bad happens you can get back running quickly; 3) a data \ncenter rack to hold the server; 4) bandwidth; 5) monitoring of the \nhardware and having a response team available to fix it. 
\n\nThe virtual private server market is interesting, but we've found \nvarious flaws that are make our transition away from owning our own gear \nand data center problematic: 1) they may not offer reverse DNS (PTR \nrecords) for your IP which is generally needed if your application sends \nout email alerts of any kind; 2) they may have nasty termination clauses \n(allowing them to terminate server at any time for any reason without \nnotice and without giving you access to your code and data stored on the \nVPS); and 3) performance will always lag as its virtualized and the \nservers may be \"over subscribed.\"\n\nI like the Amazon EC2 solution, though the pricing is overly complex and \nthey suffer the \"no DNS PTR\" ability. But since you can buy just what \nyou need, you can run warm standby servers or the like and moving your \ndata from one to the other over the private network costs nothing \nextra. I found their choice of OS confusing (we wanted CentOS, but they \nhave no Amazon-certified versions), too.\n\nDoes anybody have any recommendations for a good VPS provider?\n\nThanks,\nDavid\n", "msg_date": "Wed, 27 May 2009 08:42:06 -0700", "msg_from": "David Wall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "On Tue, May 26, 2009 at 7:58 PM, Dave Page <[email protected]> wrote:\n\n> On 5/26/09, Greg Smith <[email protected]> wrote:\n> > I keep falling into situations where it would be nice to host a server\n> > somewhere else. Virtual host solutions and the mysterious cloud are no\n> > good for the ones I run into though, as disk performance is important for\n> > all the applications I have to deal with.\n> >\n> > What I'd love to have is a way to rent a fairly serious piece of\n> dedicated\n> > hardware, ideally with multiple (at least 4) hard drives in a RAID\n> > configuration and a battery-backed write cache. The cache is negotiable.\n> > Linux would be preferred, FreeBSD or Solaris would also work; not Windows\n> > though (see \"good DB performance\").\n> >\n> > Is anyone aware of a company that offers such a thing?\n>\n> www.contegix.com offer just about the best support I've come across\n> and are familiar with Postgres. They offer RHEL (and windows) managed\n> servers on a variety of boxes. They're not a budget outfit though, but\n> that's reflected in the service.\n\n\n +1\n\n These guys have the servers AND they have the knowledge to really back it\nup. If you're looking for co-lo, or complete hands-off management, they're\nyour guys (at a price).\n\n--Scott\n\nOn Tue, May 26, 2009 at 7:58 PM, Dave Page <[email protected]> wrote:\nOn 5/26/09, Greg Smith <[email protected]> wrote:\n> I keep falling into situations where it would be nice to host a server\n> somewhere else.  Virtual host solutions and the mysterious cloud are no\n> good for the ones I run into though, as disk performance is important for\n> all the applications I have to deal with.\n>\n> What I'd love to have is a way to rent a fairly serious piece of dedicated\n> hardware, ideally with multiple (at least 4) hard drives in a RAID\n> configuration and a battery-backed write cache.  The cache is negotiable.\n> Linux would be preferred, FreeBSD or Solaris would also work; not Windows\n> though (see \"good DB performance\").\n>\n> Is anyone aware of a company that offers such a thing?\n\nwww.contegix.com offer just about the best support I've come across\nand are familiar with Postgres. 
They offer RHEL (and windows) managed\nservers on a variety of boxes. They're not a budget outfit though, but\nthat's reflected in the service. +1  These guys have the servers AND they have the knowledge to really back it up.  If you're looking for co-lo, or complete hands-off management, they're your guys (at a price).\n--Scott", "msg_date": "Wed, 27 May 2009 12:02:47 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "Hi,\n\nQuoting \"Greg Smith\" <[email protected]>:\n> What I'd love to have is a way to rent a fairly serious piece of \n> dedicated hardware\n\nI'm just stumbling over newservers.com, they provide sort of a \"cloud\" \nwith an API but that manages real servers (well, blade ones, but not \nvirtualized). Their \"fast\" variant comes with up to two SAS drives, \nhowever, I don't think there's a BBC. Hardware seems to come from \nDell, charging by hourly usage... but go read their website yourself.\n\nIf anybody has ever tried their systems, I'd like to hear back. I wish \nsuch an offering would exist for Europe (guess that's just a matter of \ntime).\n\nRegards\n\nMarkus Wanner\n\n\n\n", "msg_date": "Wed, 10 Jun 2009 13:23:10 +0200", "msg_from": "\"Markus Wanner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" }, { "msg_contents": "\"Markus Wanner\" <[email protected]> writes:\n> If anybody has ever tried their systems, I'd like to hear back. I wish such\n> an offering would exist for Europe (guess that's just a matter of time).\n\n http://www.niftyname.org/\n http://lost-oasis.fr/\n\nIt seems to be coming very soon, in France :)\n-- \ndim\n", "msg_date": "Thu, 11 Jun 2009 14:14:57 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hosted servers with good DB disk performance?" } ]
[ { "msg_contents": "Dear Friends,\n How to insert or update a file in a table using the query in postgres\nCREATE TABLE excel_file_upload\n(\n user_id integer,\n excel_file bytea\n}\n\nexample \ninsert into excel_file_upload values(1,file1)\n\nfile1 can be any file *.doc,*.xls\n How i can do this(with out using java or any other language) using query?\nPlease help me out\n\nRegards,\nRam\n\n\n\n\n\n\nDear Friends,\n    How to insert or update a file in a table using the \nquery in postgresCREATE TABLE excel_file_upload(  user_id \ninteger,  excel_file bytea\n}\n \nexample \ninsert into excel_file_upload values(1,file1)\n \nfile1 can be any file *.doc,*.xls\n    How i can do this(with out using java or any other \nlanguage) using query?\nPlease help me out\n \nRegards,\nRam", "msg_date": "Wed, 27 May 2009 12:11:30 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "Dear Friends,\n How to insert or update a file in a table using the query in postgres\nCREATE TABLE excel_file_upload\n(\n user_id integer,\n excel_file bytea\n}\n\nexample \ninsert into excel_file_upload values(1,file1)\n\nfile1 can be any file *.doc,*.xls\n How i can do this(with out using java or any other language) using query?\nPlease help me out\n\nRegards,\nRam\n\n\n\n\n\n\n\nDear Friends,\n    How to insert or update a file in a table using the \nquery in postgresCREATE TABLE excel_file_upload(  user_id \ninteger,  excel_file bytea\n}\n \nexample \ninsert into excel_file_upload values(1,file1)\n \nfile1 can be any file *.doc,*.xls\n    How i can do this(with out using java or any other \nlanguage) using query?\nPlease help me out\n \nRegards,\nRam", "msg_date": "Wed, 27 May 2009 12:12:07 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bytea updation" }, { "msg_contents": "ramasubramanian wrote:\n> How to insert or update a file in a table using the query in postgres\n> CREATE TABLE excel_file_upload\n> (\n> user_id integer,\n> excel_file bytea\n> }\n> \n> example \n> insert into excel_file_upload values(1,file1)\n> \n> file1 can be any file *.doc,*.xls\n> How i can do this(with out using java or any other language) using query?\n\nWhy did you post this to the performance list?\n\nYou can use the command line interface psql and\nuse a large object:\n\n/* import the file as a large object with a unique commentary */\n\\lo_import 'dontdo.bmp' 'Excel File'\n\n/* read the large object as bytea and insert it */\nINSERT INTO excel_file_upload (user_id, excel_file) \n VALUES (1, \n pg_catalog.loread(\n pg_catalog.lo_open(\n (SELECT DISTINCT l.loid\n FROM pg_catalog.pg_largeobject l\n JOIN pg_catalog.pg_description d\n ON (l.loid = d.objoid)\n WHERE d.description = 'Excel File'),\n 262144\n ), \n 1000000000\n )\n );\n\n/* delete the large object */\nSELECT pg_catalog.lo_unlink(\n (SELECT DISTINCT l.loid\n FROM pg_catalog.pg_largeobject l\n JOIN pg_catalog.pg_description d\n ON (l.loid = d.objoid)\n WHERE d.description = 'Excel File')\n );\n\nIt would be easier in Java or any other language ...\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 27 May 2009 13:23:14 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bytea updation" }, { "msg_contents": "ramasubramanian, 27.05.2009 08:42:\n> How to insert or update a file in a table using the query in postgres\n> CREATE TABLE excel_file_upload\n> (\n> user_id integer,\n> excel_file bytea\n> }\n> \n> example\n> insert into excel_file_upload values(1,file1)\n> \n> file1 can be any file *.doc,*.xls\n> How i can do this(with out using java or any other language) \n> using query?\n\nIf you are restricted to psql then I gues the only way is the solution show by Albe - but I guess that only works if the file is on the server.\n\nSome of the GUI SQL tools can handle blob \"uploads\" from the client. So if you are willing to switch to a different SQL client, this could be done without programming. \n\nMy own tool available at http://www.sql-workbench.net can either do that through a GUI dialog or as part of an extended SQL syntax that is of course specific to that application.\n\nThomas\n\n", "msg_date": "Wed, 27 May 2009 13:43:10 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "L" } ]
[ { "msg_contents": "So Google hasn't been helpful and I'm not entirely sure what to look\nfor in the mailing lists to find the answer to my problem, so here\ngoes.\n\nI have a query and I have run\nexplain analyze\nselect count(*)\nfrom score\nwhere leaderboardid=35 and score <= 6841 and active\n\nThe result is\n\"Aggregate (cost=2491.06..2491.07 rows=1 width=0) (actual\ntime=38.878..38.878 rows=1 loops=1)\"\n\" -> Seq Scan on score (cost=0.00..2391.17 rows=39954 width=0)\n(actual time=0.012..30.760 rows=38571 loops=1)\"\n\" Filter: (active AND (score <= 6841) AND (leaderboardid = 35))\"\n\"Total runtime: 38.937 ms\"\n\nI have an index on score, I have an index on score and leaderboard and\nactive. I can't seem to figure out how to create an index that will\nturn that \"Seq Scan\" into an index scan. The biggest problem is that\nthe query degrades very quickly with a lot more rows and I will be\ngetting A LOT MORE rows. What can I do to improve the performance of\nthis query?\n\n\n\n\n\nThank you so much,\nZC\n", "msg_date": "Wed, 27 May 2009 07:09:48 -0500", "msg_from": "Zach Calvert <[email protected]>", "msg_from_op": true, "msg_subject": "Improve Query" }, { "msg_contents": "try creating index on all three columns.\nBtw, 38ms is pretty fast. If you run that query very often, do prepare\nit, cos I reckon it takes few ms to actually create plan for it.\n", "msg_date": "Wed, 27 May 2009 16:25:12 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Query" }, { "msg_contents": "The plan ought to be different when there are more scores and the table is\nanalyzed and your statistics target is high enough. At this point you don't\nhave enough data to merit doing anything but a seq scan. The overhead is\nsimply not worth it.\n\nYou could try inserting a lot more rows. I'd create a function to do\nseveral million inserts with random numbers and then analyze and rerun.\n\nI think in the end your probably going to see a couple of bitmap index\nscans, anding the results together, and then a bitmap scan.\n\nKeep in mind that postgres stores statistics information about each column,\nbut doesn't at this point store statistics about two columns together.\n\nPreparing the query might actually hurt performance because postgres treats\na prepare as \"plan this query but I'm not going to tell you value of the\nparameters\". If you actually let the query replan every time then you will\nget a different plan for the leaderboards or score ranges that are more\npopular.\n\nOn Wed, May 27, 2009 at 8:09 AM, Zach Calvert <\[email protected]> wrote:\n\n> So Google hasn't been helpful and I'm not entirely sure what to look\n> for in the mailing lists to find the answer to my problem, so here\n> goes.\n>\n> I have a query and I have run\n> explain analyze\n> select count(*)\n> from score\n> where leaderboardid=35 and score <= 6841 and active\n>\n> The result is\n> \"Aggregate (cost=2491.06..2491.07 rows=1 width=0) (actual\n> time=38.878..38.878 rows=1 loops=1)\"\n> \" -> Seq Scan on score (cost=0.00..2391.17 rows=39954 width=0)\n> (actual time=0.012..30.760 rows=38571 loops=1)\"\n> \" Filter: (active AND (score <= 6841) AND (leaderboardid = 35))\"\n> \"Total runtime: 38.937 ms\"\n>\n> I have an index on score, I have an index on score and leaderboard and\n> active. I can't seem to figure out how to create an index that will\n> turn that \"Seq Scan\" into an index scan. 
The biggest problem is that\n> the query degrades very quickly with a lot more rows and I will be\n> getting A LOT MORE rows. What can I do to improve the performance of\n> this query?\n>\n>\n>\n>\n>\n> Thank you so much,\n> ZC\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThe plan ought to be different when there are more scores and the table is analyzed and your statistics target is high enough.  At this point you don't have enough data to merit doing anything but a seq scan.  The overhead is simply not worth it.\nYou could try inserting a lot more rows.  I'd create a function to do several million inserts with random numbers and then analyze and rerun.I think in the end your probably going to see a couple of bitmap index scans, anding the results together, and then a bitmap scan.\nKeep in mind that postgres stores statistics information about each column, but doesn't at this point store statistics about two columns together.Preparing the query might actually hurt performance because postgres treats a prepare as \"plan this query but I'm not going to tell you value of the parameters\".  If you actually let the query replan every time then you will get a different plan for the leaderboards or score ranges that are more popular.\nOn Wed, May 27, 2009 at 8:09 AM, Zach Calvert <[email protected]> wrote:\nSo Google hasn't been helpful and I'm not entirely sure what  to look\nfor in the mailing lists to find the answer to my problem, so here\ngoes.\n\nI have a query and I have run\nexplain analyze\nselect count(*)\nfrom score\nwhere leaderboardid=35 and score <= 6841 and active\n\nThe result is\n\"Aggregate  (cost=2491.06..2491.07 rows=1 width=0) (actual\ntime=38.878..38.878 rows=1 loops=1)\"\n\"  ->  Seq Scan on score  (cost=0.00..2391.17 rows=39954 width=0)\n(actual time=0.012..30.760 rows=38571 loops=1)\"\n\"        Filter: (active AND (score <= 6841) AND (leaderboardid = 35))\"\n\"Total runtime: 38.937 ms\"\n\nI have an index on score, I have an index on score and leaderboard and\nactive.  I can't seem to figure out how to create an index that will\nturn that \"Seq Scan\" into an index scan. The biggest problem is that\nthe query degrades very quickly with a lot more rows and I will be\ngetting A LOT MORE rows.  What can I do to improve the performance of\nthis query?\n\n\n\n\n\nThank you so much,\nZC\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 27 May 2009 12:06:38 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Query" }, { "msg_contents": "I'm running the inserts now via a JDBC call I have, which is then\nfollowed up by the query I'm showing and a few others. I have run\ntests on all of the others, and all others run index scans and are\nvery fast, 10 ms or less. This one started at 2 milliseconds when the\ntable is empty and is up to 40 milliseconds with 40K inserts. It is\ndegrading fast and I can't imagine what will happen with 400K, let\nalone 400 million.\n\nIt is getting slower at a fairly fast clip and I need it to remain\nfast. Does postgre just not do count(*) with index scans? Is that my\nproblem?\n\nI'm still running the exact same query. 
Here are the indexes I have tried\nCREATE INDEX idx_score_score\n ON score\n USING btree\n (score);\n\nCREATE INDEX idx_score_ldbscore\n ON score\n USING btree\n (leaderboardid, score);\n\nCREATE INDEX idx_score_ldbactive\n ON score\n USING btree\n (leaderboardid, active);\n\n\nCREATE INDEX idx_score_ldbactivescore\n ON score\n USING btree\n (leaderboardid, active, score);\n\nCREATE INDEX idx_score_scoreactiveldb\n ON score\n USING btree\n (score, active, leaderboardid);\n\nYet still I run\nexplain analyze\nselect count(*)\nfrom score\nwhere leaderboardid=35 and active and score <= 6841\n\nand get\n\"Aggregate (cost=2641.29..2641.30 rows=1 width=0) (actual\ntime=134.826..134.826 rows=1 loops=1)\"\n\" -> Seq Scan on score (cost=0.00..2536.44 rows=41938 width=0)\n(actual time=0.011..126.250 rows=40918 loops=1)\"\n\" Filter: (active AND (score <= 6841) AND (leaderboardid = 35))\"\n\"Total runtime: 48.891 ms\"\n\n\n\n\nOn Wed, May 27, 2009 at 11:06 AM, Nikolas Everett <[email protected]> wrote:\n> The plan ought to be different when there are more scores and the table is\n> analyzed and your statistics target is high enough.  At this point you don't\n> have enough data to merit doing anything but a seq scan.  The overhead is\n> simply not worth it.\n>\n> You could try inserting a lot more rows.  I'd create a function to do\n> several million inserts with random numbers and then analyze and rerun.\n>\n> I think in the end your probably going to see a couple of bitmap index\n> scans, anding the results together, and then a bitmap scan.\n>\n> Keep in mind that postgres stores statistics information about each column,\n> but doesn't at this point store statistics about two columns together.\n>\n> Preparing the query might actually hurt performance because postgres treats\n> a prepare as \"plan this query but I'm not going to tell you value of the\n> parameters\".  If you actually let the query replan every time then you will\n> get a different plan for the leaderboards or score ranges that are more\n> popular.\n>\n> On Wed, May 27, 2009 at 8:09 AM, Zach Calvert\n> <[email protected]> wrote:\n>>\n>> So Google hasn't been helpful and I'm not entirely sure what  to look\n>> for in the mailing lists to find the answer to my problem, so here\n>> goes.\n>>\n>> I have a query and I have run\n>> explain analyze\n>> select count(*)\n>> from score\n>> where leaderboardid=35 and score <= 6841 and active\n>>\n>> The result is\n>> \"Aggregate  (cost=2491.06..2491.07 rows=1 width=0) (actual\n>> time=38.878..38.878 rows=1 loops=1)\"\n>> \"  ->  Seq Scan on score  (cost=0.00..2391.17 rows=39954 width=0)\n>> (actual time=0.012..30.760 rows=38571 loops=1)\"\n>> \"        Filter: (active AND (score <= 6841) AND (leaderboardid = 35))\"\n>> \"Total runtime: 38.937 ms\"\n>>\n>> I have an index on score, I have an index on score and leaderboard and\n>> active.  I can't seem to figure out how to create an index that will\n>> turn that \"Seq Scan\" into an index scan. The biggest problem is that\n>> the query degrades very quickly with a lot more rows and I will be\n>> getting A LOT MORE rows.  
What can I do to improve the performance of\n>> this query?\n>>\n>>\n>>\n>>\n>>\n>> Thank you so much,\n>> ZC\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n", "msg_date": "Wed, 27 May 2009 11:25:18 -0500", "msg_from": "Zach Calvert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve Query" }, { "msg_contents": "you have to vacuum analyze after you've created index, afaik.\nNo, count(*) is still counting rows.\n", "msg_date": "Wed, 27 May 2009 17:32:50 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Query" }, { "msg_contents": "Still getting a seq scan after doing vacuum analyze. Any other ideas?\n\n2009/5/27 Grzegorz Jaśkiewicz <[email protected]>:\n> you have to vacuum analyze after you've created index, afaik.\n> No, count(*) is still counting rows.\n>\n", "msg_date": "Wed, 27 May 2009 11:37:06 -0500", "msg_from": "Zach Calvert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve Query" }, { "msg_contents": "Zach Calvert wrote:\n> Still getting a seq scan after doing vacuum analyze. Any other ideas?\n\nTry CLUSTERing the table on the (leaderboardid, active, score) index.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 27 May 2009 20:10:20 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Query" } ]
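One more thing worth trying on top of the suggestions above is a partial index that matches the WHERE clause, alongside the CLUSTER step. A rough sketch (the partial index name is made up; CLUSTER ... USING assumes 8.3-or-later syntax, and since CLUSTER cannot use a partial index the existing full composite index is used for that part):

CREATE INDEX idx_score_ldb_score_active
    ON score (leaderboardid, score)
    WHERE active;                                  -- only active rows are indexed

CLUSTER score USING idx_score_ldbactivescore;      -- the (leaderboardid, active, score) index
ANALYZE score;

EXPLAIN ANALYZE
SELECT count(*)
  FROM score
 WHERE leaderboardid = 35 AND active AND score <= 6841;

Whether the planner actually moves off the sequential scan still depends on how selective the predicate is: when most of the table matches, as noted above, a seq scan stays cheapest, and count(*) has to visit the matching heap rows either way.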
[ { "msg_contents": "Hey folks,\n\nDuring Greg Smith's lecture last week I could have sworn I saw on the\nscreen at some point a really long command line for bonnie++ - with\nall the switches he uses.\n\nBut checking his slides I don't see this.\n\nAm I mis-remembering?\n\nCan someone recommend the best way to run it? What combination of switches?\n\nthanks,\n-Alan\n\n-- \n“Mother Nature doesn’t do bailouts.”\n - Glenn Prickett\n", "msg_date": "Wed, 27 May 2009 09:24:51 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "running bonnie++" }, { "msg_contents": "On Wed, 27 May 2009, Alan McKay wrote:\n\n> During Greg Smith's lecture last week I could have sworn I saw on the\n> screen at some point a really long command line for bonnie++ - with\n> all the switches he uses.\n\nYou're probably thinking of the one I showed for sysbench, showing how to \nuse it to run a true random seeks/second with client load and size as \ninputs. I never run anything complicated with bonnie++ because the main \nthings I'd like to vary aren't there in the stable version anyway. I've \nfound its main value is to give an easy to replicate test result, and \nadding more switches moves away from that. If you want to tweak the \nvalues extensively, you probably should start climbing the learning curve \nfor iozone or fio instead.\n\nThat said, the latest unstable bonnie++ finally includes some concurrency \nfeatures that make tweaking it more useful, Josh Berkus's \"Whack-a-mole\" \ntutorial showed some example while I just mentioned it in passing. I'm \nwaiting until the 2.0 version comes out before I start switching my \nexamples over to it, again because of vendor repeatibility concerns.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 27 May 2009 12:01:06 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: running bonnie++" } ]
[ { "msg_contents": "\nYou should be able to get a good idea of the options from \"man bonnie++\". I've always just used the defaults with bonnie++\n\nAlso, you'll find Gregs older notes are here\n\nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n\n--- On Wed, 27/5/09, Alan McKay <[email protected]> wrote:\n\n> From: Alan McKay <[email protected]>\n> Subject: [PERFORM] running bonnie++\n> To: [email protected]\n> Date: Wednesday, 27 May, 2009, 2:24 PM\n> Hey folks,\n> \n> During Greg Smith's lecture last week I could have sworn I\n> saw on the\n> screen at some point a really long command line for\n> bonnie++ - with\n> all the switches he uses.\n> \n> But checking his slides I don't see this.\n> \n> Am I mis-remembering?\n> \n> Can someone recommend the best way to run it?  What\n> combination of switches?\n> \n> thanks,\n> -Alan\n> \n> -- \n> “Mother Nature doesn’t do bailouts.”\n>          - Glenn Prickett\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Wed, 27 May 2009 14:20:11 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: running bonnie++" } ]
[ { "msg_contents": "Hey folks,\n\nI have done some googling and found a few things on the matter. But\nam looking for some suggestions from the experts out there.\n\nGot any good pointers for reading material to help me get up to speed\non PostgreSQL clustering? What options are available? What are the\nissues? Terminology. I'm pretty new to the whole data-warehouse\nthing. And once I do all the reading, I'll even be open to product\nrecommendations :-)\n\nAnd in particular since I already have heard of this particular\nproduct - are there any opinions on Continuent?\n\nthanks,\n-Alan\n\n-- \n“Mother Nature doesn’t do bailouts.”\n - Glenn Prickett\n", "msg_date": "Wed, 27 May 2009 13:57:08 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Clustering" }, { "msg_contents": "Alan McKay wrote on 27.05.2009 19:57:\n > What options are available?\n\nI guess a good starting point is the Postgres Wiki:\n\nhttp://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling\n\n", "msg_date": "Wed, 27 May 2009 20:25:40 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Clustering" }, { "msg_contents": "On Wed, May 27, 2009 at 1:57 PM, Alan McKay <[email protected]> wrote:\n\n> Hey folks,\n>\n> I have done some googling and found a few things on the matter. But\n> am looking for some suggestions from the experts out there.\n>\n> Got any good pointers for reading material to help me get up to speed\n> on PostgreSQL clustering? What options are available? What are the\n> issues? Terminology. I'm pretty new to the whole data-warehouse\n> thing. And once I do all the reading, I'll even be open to product\n> recommendations :-)\n>\n> And in particular since I already have heard of this particular\n> product - are there any opinions on Continuent?\n>\n\n What's your specific use case? Different types of clustering behave\ndifferently depending on what you're trying to do.\n\n If you're looking to parallelize large BI type queries something like\nGridSQL or PGPool may make sense. If you're looking for more of an OLTP\nsolution, or multi-master replication, pgCluster will make more sense.\n\n --Scott\n\nOn Wed, May 27, 2009 at 1:57 PM, Alan McKay <[email protected]> wrote:\nHey folks,\n\nI have done some googling and found a few things on the matter.  But\nam looking for some suggestions from the experts out there.\n\nGot any good pointers for reading material to help me get up to speed\non PostgreSQL clustering?   What options are available?  What are the\nissues?  Terminology.  I'm pretty new to the whole data-warehouse\nthing.   And once I do all the reading, I'll even be open to product\nrecommendations :-)\n\nAnd in particular since I already have heard of this particular\nproduct - are there any opinions on Continuent?   What's your specific use case?  Different types of clustering behave differently depending on what you're trying to do.\n    If you're looking to parallelize large BI type queries something like GridSQL or PGPool may make sense.  
If you're looking for more of an OLTP solution, or multi-master replication, pgCluster will make more sense.\n --Scott", "msg_date": "Wed, 27 May 2009 14:39:02 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres Clustering" }, { "msg_contents": "Alan,\n\nhere I'm implementing something similar to the Chord protocol [1] on the\napplication level to partition my data across 6 PostgreSQL servers with N+1\nreplication. Two up sides on this approch:\n\n1 - When one server is down the load is spread between all the other ones,\ninstead of going only to the replication pair.\n2 - Adding one more server is easy, you only have to reallocate aprox. 1/n\nof your data (n=number of servers).\n\nGood luck there!\n\nBest,\nDaniel\n\nOn Wed, May 27, 2009 at 2:57 PM, Alan McKay <[email protected]> wrote:\n\n> Hey folks,\n>\n> I have done some googling and found a few things on the matter. But\n> am looking for some suggestions from the experts out there.\n>\n> Got any good pointers for reading material to help me get up to speed\n> on PostgreSQL clustering? What options are available? What are the\n> issues? Terminology. I'm pretty new to the whole data-warehouse\n> thing. And once I do all the reading, I'll even be open to product\n> recommendations :-)\n>\n> And in particular since I already have heard of this particular\n> product - are there any opinions on Continuent?\n>\n> thanks,\n> -Alan\n>\n> --\n> “Mother Nature doesn’t do bailouts.”\n> - Glenn Prickett\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAlan,here I'm implementing something similar to the Chord protocol [1] on the application level to partition my data across 6 PostgreSQL servers with N+1 replication. Two up sides on this approch:1 - When one server is down the load is spread between all the other ones, instead of going only to the replication pair.\n2 - Adding one more server is easy, you only have to reallocate aprox. 1/n of your data (n=number of servers).Good luck there!Best,DanielOn Wed, May 27, 2009 at 2:57 PM, Alan McKay <[email protected]> wrote:\nHey folks,\n\nI have done some googling and found a few things on the matter.  But\nam looking for some suggestions from the experts out there.\n\nGot any good pointers for reading material to help me get up to speed\non PostgreSQL clustering?   What options are available?  What are the\nissues?  Terminology.  I'm pretty new to the whole data-warehouse\nthing.   
And once I do all the reading, I'll even be open to product\nrecommendations :-)\n\nAnd in particular since I already have heard of this particular\nproduct - are there any opinions on Continuent?\n\nthanks,\n-Alan\n\n--\n“Mother Nature doesn’t do bailouts.”\n         - Glenn Prickett\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 27 May 2009 15:43:04 -0300", "msg_from": "Daniel van Ham Colchete <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres Clustering" }, { "msg_contents": "Try Cybercluster....\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]] En nombre de Alan McKay\nEnviado el: miércoles, 27 de mayo de 2009 13:57\nPara: [email protected]; [email protected]\nAsunto: [PERFORM] Postgres Clustering\n\nHey folks,\n\nI have done some googling and found a few things on the matter. But\nam looking for some suggestions from the experts out there.\n\nGot any good pointers for reading material to help me get up to speed\non PostgreSQL clustering? What options are available? What are the\nissues? Terminology. I'm pretty new to the whole data-warehouse\nthing. And once I do all the reading, I'll even be open to product\nrecommendations :-)\n\nAnd in particular since I already have heard of this particular\nproduct - are there any opinions on Continuent?\n\nthanks,\n-Alan\n\n-- \n“Mother Nature doesn’t do bailouts.”\n - Glenn Prickett\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 27 May 2009 14:55:51 -0400", "msg_from": "=?iso-8859-1?Q?Eddy_Ernesto_Ba=F1os_Fern=E1ndez?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres Clustering" }, { "msg_contents": "On Wed, 27 May 2009, Alan McKay wrote:\n\n> Got any good pointers for reading material to help me get up to speed\n> on PostgreSQL clustering? What options are available? What are the\n> issues? Terminology.\n\nhttp://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling \nis where I keep my notes, which has worked out great for me because then \nother people fix mistakes and keep everything current.\n\n> And in particular since I already have heard of this particular\n> product - are there any opinions on Continuent?\n\nIf what you need is fairly high-level replication where you can handle \nsome log replication overhead, one of their replicators might work well \nfor you. I usually work more with OLTP systems, where write volume is too \nhigh for something running that far outside of the database to perform \nwell enough. You might note that Continuent's solutions advertise \n\"Scaling of read-intensive applications\" rather than write ones.\n\nBy the way: cross-posting on these lists is generally frowned upon. It \ncauses problems for people who reply to you but are aren't on all of the \nlists you sent to. 
If you're not sure what list something should go on, \njust send it to -general rather than cc'ing multiple ones.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 27 May 2009 15:01:52 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Clustering" }, { "msg_contents": "On Wednesday 27 May 2009 12:55:51 Eddy Ernesto Baños Fernández wrote:\n> Try Cybercluster....\n>\n> -----Mensaje original-----\n> De: [email protected]\n> [mailto:[email protected]] En nombre de Alan McKay\n> Enviado el: miércoles, 27 de mayo de 2009 13:57\n> Para: [email protected]; [email protected]\n> Asunto: [PERFORM] Postgres Clustering\n>\n> Hey folks,\n>\n> I have done some googling and found a few things on the matter. But\n> am looking for some suggestions from the experts out there.\n>\n> Got any good pointers for reading material to help me get up to speed\n> on PostgreSQL clustering? What options are available? What are the\n> issues? Terminology. I'm pretty new to the whole data-warehouse\n> thing. And once I do all the reading, I'll even be open to product\n> recommendations :-)\n>\n> And in particular since I already have heard of this particular\n> product - are there any opinions on Continuent?\n\nContinuent works (AFAIK) like pgpool clustering, it sends the same statements \nto both/all servers in the cluster but it has no insight to the servers beyond \nthis, so if via a direct connection server A becomes out of sync with server B \nthen continuent is oblivious.\n\n\nOther tools to look at:\n- EnterpriseDB's GridSQL\n- SLONY\n- Command Prompt's PG Replicator\n\n\n>\n> thanks,\n> -Alan\n>\n> --\n> “Mother Nature doesn’t do bailouts.”\n> - Glenn Prickett\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\nOn Wednesday 27 May 2009 12:55:51 Eddy Ernesto Baños Fernández wrote:\n> Try Cybercluster....\n>\n> -----Mensaje original-----\n> De: [email protected]\n> [mailto:[email protected]] En nombre de Alan McKay\n> Enviado el: miércoles, 27 de mayo de 2009 13:57\n> Para: [email protected]; [email protected]\n> Asunto: [PERFORM] Postgres Clustering\n>\n> Hey folks,\n>\n> I have done some googling and found a few things on the matter. But\n> am looking for some suggestions from the experts out there.\n>\n> Got any good pointers for reading material to help me get up to speed\n> on PostgreSQL clustering? What options are available? What are the\n> issues? Terminology. I'm pretty new to the whole data-warehouse\n> thing. 
And once I do all the reading, I'll even be open to product\n> recommendations :-)\n>\n> And in particular since I already have heard of this particular\n> product - are there any opinions on Continuent?\nContinuent works (AFAIK) like pgpool clustering, it sends the same statements to both/all servers in the cluster but it has no insight to the servers beyond this, so if via a direct connection server A becomes out of sync with server B then continuent is oblivious.\nOther tools to look at:\n- EnterpriseDB's GridSQL\n- SLONY\n- Command Prompt's PG Replicator\n>\n> thanks,\n> -Alan\n>\n> --\n> “Mother Nature doesn’t do bailouts.”\n> - Glenn Prickett\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 27 May 2009 13:08:56 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres Clustering" }, { "msg_contents": "> By the way:  cross-posting on these lists is generally frowned upon. It\n> causes problems for people who reply to you but are aren't on all of the\n> lists you sent to.  If you're not sure what list something should go on,\n> just send it to -general rather than cc'ing multiple ones.\n\nDuly noted!\n\nThanks for all the input so far folks.\n\n-- \n“Mother Nature doesn’t do bailouts.”\n - Glenn Prickett\n", "msg_date": "Wed, 27 May 2009 15:13:28 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Clustering" }, { "msg_contents": "> Continuent works (AFAIK) like pgpool clustering, it sends the same\n> statements to both/all servers in the cluster but it has no insight to the\n> servers beyond this, so if via a direct connection server A becomes out of\n> sync with server B then continuent is oblivious.\n\nSo can the same be said for pgpool then?\n\n\nthanks,\n-Alan\n\n-- \n“Mother Nature doesn’t do bailouts.”\n - Glenn Prickett\n", "msg_date": "Wed, 27 May 2009 15:33:55 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Postgres Clustering" }, { "msg_contents": "On Wednesday 27 May 2009 13:33:55 Alan McKay wrote:\n> > Continuent works (AFAIK) like pgpool clustering, it sends the same\n> > statements to both/all servers in the cluster but it has no insight to\n> > the servers beyond this, so if via a direct connection server A becomes\n> > out of sync with server B then continuent is oblivious.\n>\n> So can the same be said for pgpool then?\n\nYes\n\n\n>\n>\n> thanks,\n> -Alan\n>\n> --\n> “Mother Nature doesn’t do bailouts.”\n> - Glenn Prickett\n\n\nOn Wednesday 27 May 2009 13:33:55 Alan McKay wrote:\n> > Continuent works (AFAIK) like pgpool clustering, it sends the same\n> > statements to both/all servers in the cluster but it has no insight to\n> > the servers beyond this, so if via a direct connection server A becomes\n> > out of sync with server B then continuent is oblivious.\n>\n> So can the same be said for pgpool then?\nYes\n>\n>\n> thanks,\n> -Alan\n>\n> --\n> “Mother Nature doesn’t do bailouts.”\n> - Glenn Prickett", "msg_date": "Wed, 27 May 2009 13:41:40 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres Clustering" }, { "msg_contents": "Hi,\n\nLe 27 mai 09 � 19:57, Alan McKay a �crit :\n> I have done some googling and found a few things on the matter. 
But\n> am looking for some suggestions from the experts out there.\n>\n> Got any good pointers for reading material to help me get up to speed\n> on PostgreSQL clustering? What options are available? What are the\n> issues? Terminology. I'm pretty new to the whole data-warehouse\n> thing. And once I do all the reading, I'll even be open to product\n> recommendations :-)\n\nDepending on your exact needs, which the terminology you're using only \nallow to guess about, you might enjoy this reading:\n http://wiki.postgresql.org/wiki/Image:Moskva_DB_Tools.v3.pdf\n\n-- \ndim", "msg_date": "Wed, 27 May 2009 21:54:24 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres Clustering" }, { "msg_contents": "> Depending on your exact needs, which the terminology you're using only allow\n> to guess about, you might enjoy this reading:\n>  http://wiki.postgresql.org/wiki/Image:Moskva_DB_Tools.v3.pdf\n\nThanks. To be honest I don't even know myself what my needs are yet.\nI've only been on the job here for a month now. And one thing I\nlearned at PGCon last week is that I have a lot of benchmarching work\nto do before I can figure out what my needs are!\n\nAt this point I'm just trying to read as much as I can on the general\ntopic of \"clustering\". What I am most interested in is load\nbalancing to be able to scale up when required, and of course to be\nable to determine ahead of time when that might be :-)\n\ncheers,\n-Alan\n\n\n-- \n“Mother Nature doesn’t do bailouts.”\n - Glenn Prickett\n", "msg_date": "Wed, 27 May 2009 15:58:00 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Postgres Clustering" }, { "msg_contents": "We didn't have much luck with Continuent. They had to make multiple\ncode level changes to get their product to work correctly with our app\non PG 8.2. We never did get it successfully implemented. At this point\nI'm stuck with WAL shipping as I can't find anything that fits my\nconstraints.\n\nThanks,\n \nScot Kreienkamp\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Alan McKay\nSent: Wednesday, May 27, 2009 1:57 PM\nTo: [email protected]; [email protected]\nSubject: [GENERAL] Postgres Clustering\n\nHey folks,\n\nI have done some googling and found a few things on the matter. But\nam looking for some suggestions from the experts out there.\n\nGot any good pointers for reading material to help me get up to speed\non PostgreSQL clustering? What options are available? What are the\nissues? Terminology. I'm pretty new to the whole data-warehouse\nthing. And once I do all the reading, I'll even be open to product\nrecommendations :-)\n\nAnd in particular since I already have heard of this particular\nproduct - are there any opinions on Continuent?\n\nthanks,\n-Alan\n\n-- \n\"Mother Nature doesn't do bailouts.\"\n - Glenn Prickett\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Thu, 28 May 2009 09:34:01 -0400", "msg_from": "\"Scot Kreienkamp\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Clustering" }, { "msg_contents": "2009/5/28 Eddy Ernesto Baños Fernández <[email protected]>:\n> Try Cybercluster....\n\nI looked into that. There is one piece of documentation that is less\nthan ten pages long. 
There is no users group, no listserve, no\ncommunity that I can discern.\n\nDo you have experience with it and if so could you please share.\n\nThanks.\n", "msg_date": "Wed, 8 Jul 2009 14:26:56 +1200", "msg_from": "Tim Uckun <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres Clustering" } ]
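The application-level partitioning described above boils down to hashing the key and taking it modulo the node count. A minimal sketch, using PostgreSQL's hashtext() purely for illustration and assuming six nodes (the key value is made up; a real Chord-style ring also tracks per-node ranges so that adding a node only moves roughly 1/n of the keys):

SELECT abs(hashtext('sensor:12345')) % 6 AS node;  -- picks a node number between 0 and 5

Note that hashtext() is an internal hash whose output is not guaranteed to stay stable across PostgreSQL major versions, so a production setup would normally compute its own hash on the application side.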
[ { "msg_contents": "Hi,\n\nI need to store data about sensor readings. There is a known (but\nconfigurable) number of sensors which can send update data at any time.\nThe \"current\" state needs to be kept but also all historical records.\nI'm trying to decide between these two designs:\n\n1) create a table for \"current\" data, one record for each sensor, update\nthis table when a sensor reading arrives, create a trigger that would\ntransfer old record data to a history table (of basically the same\nstructure)\n2) write only to the history table, use relatively complex queries or\noutside-the-database magic to determine what the \"current\" values of the\nsensors are.\n\nThe volume of sensor data is potentially huge, on the order of 500,000\nupdates per hour. Sensor data is few numeric(15,5) numbers.\n\nI think the second design would be easiest on the database but as the\ncurrent sensor state can potentially be queried often, it might be too\nslow to read.\n\nAny recommendations?", "msg_date": "Thu, 28 May 2009 14:54:27 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Storing sensor data" }, { "msg_contents": "Ivan Voras wrote:\n> I need to store data about sensor readings. There is a known (but\n> configurable) number of sensors which can send update data at any time.\n> The \"current\" state needs to be kept but also all historical records.\n> I'm trying to decide between these two designs:\n> \n> 1) create a table for \"current\" data, one record for each sensor, update\n> this table when a sensor reading arrives, create a trigger that would\n> transfer old record data to a history table (of basically the same\n> structure)\n> 2) write only to the history table, use relatively complex queries or\n> outside-the-database magic to determine what the \"current\" values of the\n> sensors are.\n\n3) write only to the history table, but have an INSERT trigger to update \nthe table with \"current\" data. This has the same performance \ncharacteristics as 1, but let's you design your application like 2.\n\nI think I'd choose this approach (or 2), since it can handle \nout-of-order or delayed arrival of sensor readings gracefully (assuming \nthey are timestamped at source).\n\nIf you go with 2, I'd recommend to still create a view to encapsulate \nthe complex query for the current values, to make the application \ndevelopment simpler. And if it gets slow, you can easily swap the view \nwith a table, updated with triggers or periodically, without changing \nthe application.\n\n> The volume of sensor data is potentially huge, on the order of 500,000\n> updates per hour. Sensor data is few numeric(15,5) numbers.\n\nWhichever design you choose, you should also consider partitioning the data.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 28 May 2009 16:31:17 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "Option 1 is about somewhere between 2 and 3 times more work for the database\nthan option 2.\n\nDo you need every sensor update to hit the database? In a situation like\nthis I'd be tempted to keep the current values in the application itself and\nthen sweep them all into the database periodically. If some of the sensor\nupdates should hit the database faster, you could push those in as you get\nthem rather than wait for your sweeper. 
This setup has the advantage that\nyou can scale up the number of sensors and the frequency the sensors report\nwithout having to scale up the disks. You can also do the sweeping all in\none transaction or even in one batch update.\n\n\nOn Thu, May 28, 2009 at 9:31 AM, Heikki Linnakangas <\[email protected]> wrote:\n\n> Ivan Voras wrote:\n>\n>> The volume of sensor data is potentially huge, on the order of 500,000\n>> updates per hour. Sensor data is few numeric(15,5) numbers.\n>>\n>\n> Whichever design you choose, you should also consider partitioning the\n> data.\n>\n\n\nAmen. Do that.\n\nOption 1 is about somewhere between 2 and 3 times more work for the database than option 2.\n\nDo you need every sensor update to hit the database?  In a situation\nlike this I'd be tempted to keep the current values in the application\nitself and then sweep them all into the database periodically.  If some\nof the sensor updates should hit the database faster, you could push\nthose in as you get them rather than wait for your sweeper.  This setup\nhas the advantage that you can scale up the number of sensors and the\nfrequency the sensors report without having to scale up the disks.  You can also do the sweeping all in one transaction or even in one batch update.On Thu, May 28, 2009 at 9:31 AM, Heikki Linnakangas <[email protected]> wrote:\nIvan Voras wrote:\n\nThe volume of sensor data is potentially huge, on the order of 500,000\nupdates per hour. Sensor data is few numeric(15,5) numbers.\n\n\nWhichever design you choose, you should also consider partitioning the data.\nAmen.  Do that.", "msg_date": "Thu, 28 May 2009 09:38:41 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "On Thu, May 28, 2009 at 2:54 PM, Ivan Voras <[email protected]> wrote:\n> The volume of sensor data is potentially huge, on the order of 500,000\n> updates per hour. Sensor data is few numeric(15,5) numbers.\n\nThe size of that dataset, combined with the apparent simplicity of\nyour schema and the apparent requirement for most-sequential access\n(I'm guessing about the latter two), all lead me to suspect you would\nbe happier with something other than a traditional relational\ndatabase.\n\nI don't know how exact your historical data has to be. Could you get\nby with something like RRDTool? RRDTool is a round-robin database that\nstores multiple levels of historical values aggregated by function. So\nyou typically create an \"average\" database, a \"max\" database and so\non, with the appropriate functions to transform the data, and you\nsubdivide these into day, month, year and so on, by the granularity of\nyour choice.\n\nWhen you store a value, the historical data is aggregated\nappropriately -- at appropriate levels of granularity, so the current\nday database is more precise than the monthly one, and so on -- and\nyou always have access to the exact current data. 
RRDTool is used by\nsoftware such as Munin and Cacti that track a huge number of readings\nover time for graphing.\n\nIf you require precise data with the ability to filter, aggregate and\ncorrelate over multiple dimensions, something like Hadoop -- or one of\nthe Hadoop-based column database implementations, such as HBase or\nHypertable -- might be a better option, combined with MapReduce/Pig to\nexecute analysis jobs\n\nA.\n", "msg_date": "Thu, 28 May 2009 15:39:53 +0200", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "2009/5/28 Heikki Linnakangas <[email protected]>:\n> Ivan Voras wrote:\n>>\n>> I need to store data about sensor readings. There is a known (but\n>> configurable) number of sensors which can send update data at any time.\n>> The \"current\" state needs to be kept but also all historical records.\n>> I'm trying to decide between these two designs:\n>>\n>> 1) create a table for \"current\" data, one record for each sensor, update\n>> this table when a sensor reading arrives, create a trigger that would\n>> transfer old record data to a history table (of basically the same\n>> structure)\n>> 2) write only to the history table, use relatively complex queries or\n>> outside-the-database magic to determine what the \"current\" values of the\n>> sensors are.\n>\n> 3) write only to the history table, but have an INSERT trigger to update the\n> table with \"current\" data. This has the same performance characteristics as\n> 1, but let's you design your application like 2.\n\nExcellent idea!\n\n> I think I'd choose this approach (or 2), since it can handle out-of-order or\n> delayed arrival of sensor readings gracefully (assuming they are timestamped\n> at source).\n\nIt seems like your approach is currently the winner.\n\n> If you go with 2, I'd recommend to still create a view to encapsulate the\n> complex query for the current values, to make the application development\n> simpler. And if it gets slow, you can easily swap the view with a table,\n> updated with triggers or periodically, without changing the application.\n>\n>> The volume of sensor data is potentially huge, on the order of 500,000\n>> updates per hour. Sensor data is few numeric(15,5) numbers.\n>\n> Whichever design you choose, you should also consider partitioning the data.\n\nI'll look into it, but we'll first see if we can get away with\nlimiting the time the data needs to be available.\n", "msg_date": "Thu, 28 May 2009 16:55:34 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "2009/5/28 Nikolas Everett <[email protected]>:\n> Option 1 is about somewhere between 2 and 3 times more work for the database\n> than option 2.\n\nYes, for writes.\n\n> Do you need every sensor update to hit the database?  In a situation like\n\nWe can't miss an update - they can be delayed but they all need to be recorded.\n\n> this I'd be tempted to keep the current values in the application itself and\n> then sweep them all into the database periodically.  If some of the sensor\n> updates should hit the database faster, you could push those in as you get\n> them rather than wait for your sweeper.  This setup has the advantage that\n> you can scale up the number of sensors and the frequency the sensors report\n> without having to scale up the disks.  
You can also do the sweeping all in\n> one transaction or even in one batch update.\n\nIt would be nice, but then we need to invest more effort in making the\nfront-end buffering resilient.\n", "msg_date": "Thu, 28 May 2009 16:58:12 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "On Thu, May 28, 2009 at 04:55:34PM +0200, Ivan Voras wrote:\n> 2009/5/28 Heikki Linnakangas <[email protected]>:\n> > Ivan Voras wrote:\n> >>\n> >> I need to store data about sensor readings. There is a known (but\n> >> configurable) number of sensors which can send update data at any time.\n> >> The \"current\" state needs to be kept but also all historical records.\n> >> I'm trying to decide between these two designs:\n> >>\n> >> 1) create a table for \"current\" data, one record for each sensor, update\n> >> this table when a sensor reading arrives, create a trigger that would\n> >> transfer old record data to a history table (of basically the same\n> >> structure)\n> >> 2) write only to the history table, use relatively complex queries or\n> >> outside-the-database magic to determine what the \"current\" values of the\n> >> sensors are.\n> >\n> > 3) write only to the history table, but have an INSERT trigger to update the\n> > table with \"current\" data. This has the same performance characteristics as\n> > 1, but let's you design your application like 2.\n> \n> Excellent idea!\n> \n> > I think I'd choose this approach (or 2), since it can handle out-of-order or\n> > delayed arrival of sensor readings gracefully (assuming they are timestamped\n> > at source).\n> \n> It seems like your approach is currently the winner.\n> \n> > If you go with 2, I'd recommend to still create a view to encapsulate the\n> > complex query for the current values, to make the application development\n> > simpler. And if it gets slow, you can easily swap the view with a table,\n> > updated with triggers or periodically, without changing the application.\n> >\n> >> The volume of sensor data is potentially huge, on the order of 500,000\n> >> updates per hour. Sensor data is few numeric(15,5) numbers.\n> >\n> > Whichever design you choose, you should also consider partitioning the data.\n> \n> I'll look into it, but we'll first see if we can get away with\n> limiting the time the data needs to be available.\n> \n\nMr. Voras,\n\nOne big benefit of partitioning is that you can prune old data with\nminimal impact to the running system. Doing a large bulk delete would\nbe extremely I/O impacting without partion support. We use this for\na DB log system and it allows us to simply truncate a day table instead\nof a delete -- much, much faster.\n\nRegards,\nKen\n", "msg_date": "Thu, 28 May 2009 10:01:02 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "2009/5/28 Alexander Staubo <[email protected]>:\n> On Thu, May 28, 2009 at 2:54 PM, Ivan Voras <[email protected]> wrote:\n>> The volume of sensor data is potentially huge, on the order of 500,000\n>> updates per hour. 
Sensor data is few numeric(15,5) numbers.\n>\n> The size of that dataset, combined with the apparent simplicity of\n> your schema and the apparent requirement for most-sequential access\n> (I'm guessing about the latter two),\n\nYour guesses are correct, except every now and then a random value\nindexed on a timestamp needs to be retrieved.\n\n> all lead me to suspect you would\n> be happier with something other than a traditional relational\n> database.\n>\n> I don't know how exact your historical data has to be. Could you get\n\nNo \"lossy\" compression is allowed. Exact data is needed for the whole dataset-\n\n> If you require precise data with the ability to filter, aggregate and\n> correlate over multiple dimensions, something like Hadoop -- or one of\n> the Hadoop-based column database implementations, such as HBase or\n> Hypertable -- might be a better option, combined with MapReduce/Pig to\n> execute analysis jobs\n\nThis looks like an interesting idea to investigate. Do you have more\nexperience with such databases? How do they fare with the following\nrequirements:\n\n* Storing large datasets (do they pack data well in the database? No\nwasted space like in e.g. hash tables?)\n* Retrieving specific random records based on a timestamp or record ID?\n* Storing \"inifinite\" datasets (i.e. whose size is not known in\nadvance - cf. e.g. hash tables)\n\nOn the other hand, we could periodically transfer data from PostgreSQL\ninto a simpler database (e.g. BDB) for archival purposes (at the\nexpense of more code). Would they be better suited?\n", "msg_date": "Thu, 28 May 2009 17:06:31 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "2009/5/28 Kenneth Marshall <[email protected]>:\n\n>\n> One big benefit of partitioning is that you can prune old data with\n> minimal impact to the running system. Doing a large bulk delete would\n> be extremely I/O impacting without partion support. We use this for\n> a DB log system and it allows us to simply truncate a day table instead\n> of a delete -- much, much faster.\n\nThanks. I'll need to investigate how much administrative overhead and\nfragility partitioning will introduce since the data will also be\nreplicated between 2 servers (I'm thinking of using Slony). Any\nexperience with this combination?\n", "msg_date": "Thu, 28 May 2009 17:24:33 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "On Thu, May 28, 2009 at 5:06 PM, Ivan Voras <[email protected]> wrote:\n>> If you require precise data with the ability to filter, aggregate and\n>> correlate over multiple dimensions, something like Hadoop -- or one of\n>> the Hadoop-based column database implementations, such as HBase or\n>> Hypertable -- might be a better option, combined with MapReduce/Pig to\n>> execute analysis jobs\n>\n> This looks like an interesting idea to investigate. Do you have more\n> experience with such databases? How do they fare with the following\n> requirements:\n\nWe might want to take this discussion off-list, since this list is\nabout PostgreSQL. Feel free to reply privately.\n\n> * Storing large datasets (do they pack data well in the database? No\n> wasted space like in e.g. hash tables?)\n\nColumns databases like Hypertable and HBase are designed to store data\nquite efficiently. 
Each column is grouped in a unit called a column\nfamily and stored together in chunks usually called SSTables, after\nthe Google Bigtable paper. (When you design your database you must\ndetermine which columns are usually accessed together, in other to\navoid incurring the I/O cost of loading non-pertinent columns.) Each\nSSTable is like a partition. When storing a chunk to disk, the column\nis compressed, each column being stored sequentially for optimal\ncompression.\n\nI have used HBase, but I don't have any feel for how much space it\nwastes. In theory, though, space usage should be more optimal than\nwith PostgreSQL. I have used Cassandra, another column database I\nwould also recommend, which is very efficient. In many ways I prefer\nCassandra to HBase -- it's leaner, completely decentralized (no single\npoint of failure) and independent of the rather huge, monolithic\nHadoop project -- but it does not currently support MapReduce. If you\nwant to implement some kind of distributed analysis system, you will\nneed to write yourself.\n\nAll three column stores support mapping information by a time\ndimension. Each time you write a key, you also provide a timestamp. In\ntheory you can retain the entire history of a single key. HBase lets\nyou specify how many revisions to retain; not sure what Cassandra\ndoes. However, Cassandra introduces the notion of a \"supercolumn\nfamily\", another grouping level which lets you use the timestamp as a\ncolumn key. To explain how this works, consider the following inserts:\n\n # insert(table_name, key, column, value, timestamp)\n db.insert(\"readings\", \"temperature_sensor\", \"value:1\", 23, \"200905281725023\")\n db.insert(\"readings\", \"temperature_sensor\", \"value:2\", 27, \"200905281725023\")\n db.insert(\"readings\", \"temperature_sensor\", \"value:3\", 21, \"200905281725023\")\n\nThe resulting \"temperature_sensor\" row will have three column values:\n\n value:1 value:2 value:3\n 23 27 21\n\nYou can keep adding values and the row will get bigger. Because\ncolumns are dynamic, only that row will grow; all other rows will stay\nthe same size. Cassandra users usually use the column name as a kind\nof value -- image it's like subindexing an array.\n\nAs you can see, I also passed a timestamp (the 2009.. bit), which is\nused for versioning. Since anyone can write to any node in a cluster,\nCassandra needs to be able to resolve conflicts.\n\nNote that these databases are inherently distributed. You can run them\non a single node just fine -- and that might be appropriate in your\ncase -- but they really shine when you run a whole cluster. Cassandra\nis multi-master, so you can just boot up a number of nodes and read\nfrom/write to any of them.\n\n> * Retrieving specific random records based on a timestamp or record ID?\n\nAbsolutely.\n\n> * Storing \"inifinite\" datasets (i.e. whose size is not known in\n> advance - cf. e.g. hash tables)\n\nThis is one area where column databases are better than relational\nones. The schema is completely dynamic, and you can treat it as a hash\ntable.\n\n> On the other hand, we could periodically transfer data from PostgreSQL\n> into a simpler database (e.g. BDB) for archival purposes (at the\n> expense of more code). 
Would they be better suited?\n\nConsidering the size and sequential nature of the data, I think they\nwould be better match than a simple key-value store like BDB.\n\nA.\n", "msg_date": "Thu, 28 May 2009 17:34:18 +0200", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "depends on how soon do you need to access that data after it's being\ncreated, the way I do it in my systems, I get data from 8 points, bit\nless than you - but I dump it to csv, and import it on database host\n(separate server).\nnow, you could go to BDB or whatever, but that's not the solution.\n\nSo, I would try dumping it to a file, and have separate process, maybe\nseparete server that would import it, store it, as database. Whatever\nyou do, as guys said - partition.\n", "msg_date": "Thu, 28 May 2009 16:39:55 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "On Thu, May 28, 2009 at 05:24:33PM +0200, Ivan Voras wrote:\n> 2009/5/28 Kenneth Marshall <[email protected]>:\n> \n> >\n> > One big benefit of partitioning is that you can prune old data with\n> > minimal impact to the running system. Doing a large bulk delete would\n> > be extremely I/O impacting without partion support. We use this for\n> > a DB log system and it allows us to simply truncate a day table instead\n> > of a delete -- much, much faster.\n> \n> Thanks. I'll need to investigate how much administrative overhead and\n> fragility partitioning will introduce since the data will also be\n> replicated between 2 servers (I'm thinking of using Slony). Any\n> experience with this combination?\n> \n\nWe use Slony1 on a number of databases, but none yet on which we\nuse data partitioning.\n\nCheers,\nKen\n", "msg_date": "Thu, 28 May 2009 12:12:13 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "I currently have a database doing something very similar. I setup partition\ntables with predictable names based on the the data's timestamp week number\neg: (Data_YYYY_WI).\n\nI have a tigger on the parent partition table to redirect data to the\ncorrect partition( tablename:='Data_' || to_char('$NEW(ts)'::timestamptz,\n'IYYY_IW') ) . then I use dynamic sql to do the insert. I did some\noptimization by writting it in pl/TCL and using global variables to store\nprepared insert statements.\n\nMost queries for me are based on the date and we have decent performance\nwith our current setup. For last/current sensor data we just store the last\ndataID in the sensor record. I haven't thought of a better way yet. After\nbatch inserts we caculate the last reading for each participating sensorID\ninserted.\n\nWith partition tables we struggled with the query to get the lastest data :\nselect * from \"Data\" where \"sensorID\"=x order by ts limit 1 -- for parition\ntables. See (\nhttp://archives.postgresql.org/pgsql-performance/2008-11/msg00284.php)\n\n\n\n\n\nOn Thu, May 28, 2009 at 7:55 AM, Ivan Voras <[email protected]> wrote:\n\n> 2009/5/28 Heikki Linnakangas <[email protected]>:\n> > Ivan Voras wrote:\n> >>\n> >> I need to store data about sensor readings. 
There is a known (but\n> >> configurable) number of sensors which can send update data at any time.\n> >> The \"current\" state needs to be kept but also all historical records.\n> >> I'm trying to decide between these two designs:\n> >>\n> >> 1) create a table for \"current\" data, one record for each sensor, update\n> >> this table when a sensor reading arrives, create a trigger that would\n> >> transfer old record data to a history table (of basically the same\n> >> structure)\n> >> 2) write only to the history table, use relatively complex queries or\n> >> outside-the-database magic to determine what the \"current\" values of the\n> >> sensors are.\n> >\n> > 3) write only to the history table, but have an INSERT trigger to update\n> the\n> > table with \"current\" data. This has the same performance characteristics\n> as\n> > 1, but let's you design your application like 2.\n>\n> Excellent idea!\n>\n> > I think I'd choose this approach (or 2), since it can handle out-of-order\n> or\n> > delayed arrival of sensor readings gracefully (assuming they are\n> timestamped\n> > at source).\n>\n> It seems like your approach is currently the winner.\n>\n> > If you go with 2, I'd recommend to still create a view to encapsulate the\n> > complex query for the current values, to make the application development\n> > simpler. And if it gets slow, you can easily swap the view with a table,\n> > updated with triggers or periodically, without changing the application.\n> >\n> >> The volume of sensor data is potentially huge, on the order of 500,000\n> >> updates per hour. Sensor data is few numeric(15,5) numbers.\n> >\n> > Whichever design you choose, you should also consider partitioning the\n> data.\n>\n> I'll look into it, but we'll first see if we can get away with\n> limiting the time the data needs to be available.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI currently have a database doing something very similar.  I setup partition tables with predictable names based on the the data's timestamp week number eg:  (Data_YYYY_WI).  I have a tigger on the parent partition table to redirect data to the correct partition( tablename:='Data_' || to_char('$NEW(ts)'::timestamptz, 'IYYY_IW') ) .  then I use dynamic sql to do the insert.  I did some optimization by writting it in pl/TCL and using global variables to store prepared insert statements.\nMost queries for me are based  on the date and we have decent performance with our current setup.  For last/current sensor data we just store the last dataID in the sensor record.  I haven't thought of a better way yet.  After batch inserts we caculate the last reading for each participating sensorID inserted.\nWith partition tables we struggled with the query to get the lastest data :  select * from \"Data\" where \"sensorID\"=x order by ts limit 1  -- for parition tables.   See (http://archives.postgresql.org/pgsql-performance/2008-11/msg00284.php)   \nOn Thu, May 28, 2009 at 7:55 AM, Ivan Voras <[email protected]> wrote:\n2009/5/28 Heikki Linnakangas <[email protected]>:\n> Ivan Voras wrote:\n>>\n>> I need to store data about sensor readings. 
There is a known (but\n>> configurable) number of sensors which can send update data at any time.\n>> The \"current\" state needs to be kept but also all historical records.\n>> I'm trying to decide between these two designs:\n>>\n>> 1) create a table for \"current\" data, one record for each sensor, update\n>> this table when a sensor reading arrives, create a trigger that would\n>> transfer old record data to a history table (of basically the same\n>> structure)\n>> 2) write only to the history table, use relatively complex queries or\n>> outside-the-database magic to determine what the \"current\" values of the\n>> sensors are.\n>\n> 3) write only to the history table, but have an INSERT trigger to update the\n> table with \"current\" data. This has the same performance characteristics as\n> 1, but let's you design your application like 2.\n\nExcellent idea!\n\n> I think I'd choose this approach (or 2), since it can handle out-of-order or\n> delayed arrival of sensor readings gracefully (assuming they are timestamped\n> at source).\n\nIt seems like your approach is currently the winner.\n\n> If you go with 2, I'd recommend to still create a view to encapsulate the\n> complex query for the current values, to make the application development\n> simpler. And if it gets slow, you can easily swap the view with a table,\n> updated with triggers or periodically, without changing the application.\n>\n>> The volume of sensor data is potentially huge, on the order of 500,000\n>> updates per hour. Sensor data is few numeric(15,5) numbers.\n>\n> Whichever design you choose, you should also consider partitioning the data.\n\nI'll look into it, but we'll first see if we can get away with\nlimiting the time the data needs to be available.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 28 May 2009 11:56:37 -0700", "msg_from": "Greg Jaman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing sensor data" }, { "msg_contents": "I also forgot to note that I had no problems setting up replication via\nlondiste (skytools). The cronjob that creates the partition each week for\nme also adds the table to the replication set. As simple as:\n londiste.py londiste.ini provider add 'public.Data_YYYY_WI'\n londiste.py londiste.ini subscriber add 'public.Data_YYYY_WI'\n\n\n\n\n\n\nOn Thu, May 28, 2009 at 11:56 AM, Greg Jaman <[email protected]> wrote:\n\n> I currently have a database doing something very similar. I setup\n> partition tables with predictable names based on the the data's timestamp\n> week number eg: (Data_YYYY_WI).\n>\n> I have a tigger on the parent partition table to redirect data to the\n> correct partition( tablename:='Data_' || to_char('$NEW(ts)'::timestamptz,\n> 'IYYY_IW') ) . then I use dynamic sql to do the insert. I did some\n> optimization by writting it in pl/TCL and using global variables to store\n> prepared insert statements.\n>\n> Most queries for me are based on the date and we have decent performance\n> with our current setup. For last/current sensor data we just store the last\n> dataID in the sensor record. I haven't thought of a better way yet. After\n> batch inserts we caculate the last reading for each participating sensorID\n> inserted.\n>\n> With partition tables we struggled with the query to get the lastest data\n> : select * from \"Data\" where \"sensorID\"=x order by ts limit 1 -- for\n> parition tables. 
See (\n> http://archives.postgresql.org/pgsql-performance/2008-11/msg00284.php)\n>\n>\n>\n>\n>\n> On Thu, May 28, 2009 at 7:55 AM, Ivan Voras <[email protected]> wrote:\n>\n>> 2009/5/28 Heikki Linnakangas <[email protected]>:\n>> > Ivan Voras wrote:\n>> >>\n>> >> I need to store data about sensor readings. There is a known (but\n>> >> configurable) number of sensors which can send update data at any time.\n>> >> The \"current\" state needs to be kept but also all historical records.\n>> >> I'm trying to decide between these two designs:\n>> >>\n>> >> 1) create a table for \"current\" data, one record for each sensor,\n>> update\n>> >> this table when a sensor reading arrives, create a trigger that would\n>> >> transfer old record data to a history table (of basically the same\n>> >> structure)\n>> >> 2) write only to the history table, use relatively complex queries or\n>> >> outside-the-database magic to determine what the \"current\" values of\n>> the\n>> >> sensors are.\n>> >\n>> > 3) write only to the history table, but have an INSERT trigger to update\n>> the\n>> > table with \"current\" data. This has the same performance characteristics\n>> as\n>> > 1, but let's you design your application like 2.\n>>\n>> Excellent idea!\n>>\n>> > I think I'd choose this approach (or 2), since it can handle\n>> out-of-order or\n>> > delayed arrival of sensor readings gracefully (assuming they are\n>> timestamped\n>> > at source).\n>>\n>> It seems like your approach is currently the winner.\n>>\n>> > If you go with 2, I'd recommend to still create a view to encapsulate\n>> the\n>> > complex query for the current values, to make the application\n>> development\n>> > simpler. And if it gets slow, you can easily swap the view with a table,\n>> > updated with triggers or periodically, without changing the application.\n>> >\n>> >> The volume of sensor data is potentially huge, on the order of 500,000\n>> >> updates per hour. Sensor data is few numeric(15,5) numbers.\n>> >\n>> > Whichever design you choose, you should also consider partitioning the\n>> data.\n>>\n>> I'll look into it, but we'll first see if we can get away with\n>> limiting the time the data needs to be available.\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\nI also forgot to note that I had no problems setting up replication via londiste (skytools).   The cronjob that creates the partition each week for me also adds the table to the replication set.   As simple as:   londiste.py londiste.ini provider add 'public.Data_YYYY_WI'\n   londiste.py londiste.ini subscriber add 'public.Data_YYYY_WI'On Thu, May 28, 2009 at 11:56 AM, Greg Jaman <[email protected]> wrote:\nI currently have a database doing something very similar.  I setup partition tables with predictable names based on the the data's timestamp week number eg:  (Data_YYYY_WI).  \nI have a tigger on the parent partition table to redirect data to the correct partition( tablename:='Data_' || to_char('$NEW(ts)'::timestamptz, 'IYYY_IW') ) .  then I use dynamic sql to do the insert.  I did some optimization by writting it in pl/TCL and using global variables to store prepared insert statements.\nMost queries for me are based  on the date and we have decent performance with our current setup.  For last/current sensor data we just store the last dataID in the sensor record.  I haven't thought of a better way yet.  
After batch inserts we caculate the last reading for each participating sensorID inserted.\nWith partition tables we struggled with the query to get the lastest data :  select * from \"Data\" where \"sensorID\"=x order by ts limit 1  -- for parition tables.   See (http://archives.postgresql.org/pgsql-performance/2008-11/msg00284.php)   \n\nOn Thu, May 28, 2009 at 7:55 AM, Ivan Voras <[email protected]> wrote:\n\n2009/5/28 Heikki Linnakangas <[email protected]>:\n> Ivan Voras wrote:\n>>\n>> I need to store data about sensor readings. There is a known (but\n>> configurable) number of sensors which can send update data at any time.\n>> The \"current\" state needs to be kept but also all historical records.\n>> I'm trying to decide between these two designs:\n>>\n>> 1) create a table for \"current\" data, one record for each sensor, update\n>> this table when a sensor reading arrives, create a trigger that would\n>> transfer old record data to a history table (of basically the same\n>> structure)\n>> 2) write only to the history table, use relatively complex queries or\n>> outside-the-database magic to determine what the \"current\" values of the\n>> sensors are.\n>\n> 3) write only to the history table, but have an INSERT trigger to update the\n> table with \"current\" data. This has the same performance characteristics as\n> 1, but let's you design your application like 2.\n\nExcellent idea!\n\n> I think I'd choose this approach (or 2), since it can handle out-of-order or\n> delayed arrival of sensor readings gracefully (assuming they are timestamped\n> at source).\n\nIt seems like your approach is currently the winner.\n\n> If you go with 2, I'd recommend to still create a view to encapsulate the\n> complex query for the current values, to make the application development\n> simpler. And if it gets slow, you can easily swap the view with a table,\n> updated with triggers or periodically, without changing the application.\n>\n>> The volume of sensor data is potentially huge, on the order of 500,000\n>> updates per hour. Sensor data is few numeric(15,5) numbers.\n>\n> Whichever design you choose, you should also consider partitioning the data.\n\nI'll look into it, but we'll first see if we can get away with\nlimiting the time the data needs to be available.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 28 May 2009 12:38:41 -0700", "msg_from": "Greg Jaman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing sensor data" } ]
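Below is a minimal sketch of the "option 3" design Heikki describes in this thread — write only to the history table, with an INSERT trigger that maintains a current-values table. All object names here (sensor_history, sensor_current, sensor_id, ts, value) are invented for illustration; only the numeric(15,5) type comes from the thread, and the ts <= NEW.ts guard is just one way to get the out-of-order tolerance Heikki mentions.

  CREATE TABLE sensor_history (
      sensor_id  integer        NOT NULL,
      ts         timestamptz    NOT NULL DEFAULT now(),
      value      numeric(15,5)  NOT NULL
  );

  CREATE TABLE sensor_current (
      sensor_id  integer        PRIMARY KEY,
      ts         timestamptz    NOT NULL,
      value      numeric(15,5)  NOT NULL
  );

  -- After every history insert, upsert the "current" row, but only if the
  -- new reading is not older than the one already stored.
  CREATE OR REPLACE FUNCTION update_sensor_current() RETURNS trigger AS $$
  BEGIN
      UPDATE sensor_current
         SET ts = NEW.ts, value = NEW.value
       WHERE sensor_id = NEW.sensor_id
         AND ts <= NEW.ts;
      IF NOT FOUND THEN
          -- Either this sensor has no current row yet, or its current row
          -- is newer than this (late-arriving) reading.
          BEGIN
              INSERT INTO sensor_current (sensor_id, ts, value)
              VALUES (NEW.sensor_id, NEW.ts, NEW.value);
          EXCEPTION WHEN unique_violation THEN
              NULL;  -- a newer row is already there (or a concurrent insert won)
          END;
      END IF;
      RETURN NULL;  -- the return value of an AFTER ROW trigger is ignored
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER sensor_history_current
      AFTER INSERT ON sensor_history
      FOR EACH ROW EXECUTE PROCEDURE update_sensor_current();

Readers of the current state then only ever touch sensor_current, while the application keeps the simple "insert a reading" interface of option 2.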
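For the "option 2" variant, the view Heikki recommends for encapsulating the current values can be as small as a DISTINCT ON query over the same hypothetical history table:

  CREATE VIEW sensor_current_view AS
  SELECT DISTINCT ON (sensor_id) sensor_id, ts, value
    FROM sensor_history
   ORDER BY sensor_id, ts DESC;

With an index on (sensor_id, ts DESC) a single-sensor lookup (WHERE sensor_id = x) stays cheap, but materializing the whole view over a large history — or across many partitions, the problem Greg describes with his ORDER BY ts LIMIT 1 query — is the part that tends to get slow, which is part of why the thread leans toward the trigger-maintained table.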
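Finally, a sketch of the week-based partitioning Greg describes (and of the cheap retirement of old data Kenneth points out), again with invented names. On 8.3 this is inheritance plus a redirect trigger; note that EXECUTE ... USING only exists from 8.4 on, so on 8.3 the values would have to be spliced in with quote_literal(), and constraint_exclusion must be enabled for the planner to skip irrelevant weeks.

  -- One child table per ISO week, e.g. the week containing 28 May 2009:
  CREATE TABLE sensor_history_2009_22 (
      CHECK (ts >= '2009-05-25' AND ts < '2009-06-01')
  ) INHERITS (sensor_history);

  CREATE INDEX sensor_history_2009_22_sensor_ts
      ON sensor_history_2009_22 (sensor_id, ts);

  -- Redirect inserts on the parent into the child for the row's ISO week,
  -- mirroring Greg's to_char(ts, 'IYYY_IW') naming scheme.
  CREATE OR REPLACE FUNCTION sensor_history_insert() RETURNS trigger AS $$
  DECLARE
      child text;
  BEGIN
      child := 'sensor_history_' || to_char(NEW.ts, 'IYYY_IW');
      EXECUTE 'INSERT INTO ' || quote_ident(child)
           || ' (sensor_id, ts, value) VALUES ($1, $2, $3)'
        USING NEW.sensor_id, NEW.ts, NEW.value;   -- 8.4+ syntax
      RETURN NULL;  -- suppress the insert into the parent itself
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER sensor_history_partition
      BEFORE INSERT ON sensor_history
      FOR EACH ROW EXECUTE PROCEDURE sensor_history_insert();

  -- Retiring a week of old data is then just:
  --   DROP TABLE sensor_history_2009_22;   -- or TRUNCATE it

One caveat if this is combined with the option-3 trigger sketched earlier: returning NULL from this BEFORE trigger also suppresses the parent's AFTER ROW triggers for that row, so the current-values upsert would have to move into this redirect function (or onto each child table).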
[ { "msg_contents": "Hmmm. Anyone out there have the Continuent solution working with PostgreSQL?\nIf so, what release? We're at 8.3 right now.\n\nthanks,\n-Alan\np.s. I'm continuing the cross-post because that is the way I started\nthis thread. Future threads will not be cross-posted.\n\nOn Thu, May 28, 2009 at 9:34 AM, Scot Kreienkamp <[email protected]> wrote:\n> We didn't have much luck with Continuent.  They had to make multiple\n> code level changes to get their product to work correctly with our app\n> on PG 8.2.  We never did get it successfully implemented.  At this point\n> I'm stuck with WAL shipping as I can't find anything that fits my\n> constraints.\n>\n> Thanks,\n>\n> Scot Kreienkamp\n\n\n-- \n“Mother Nature doesn’t do bailouts.”\n - Glenn Prickett\n", "msg_date": "Thu, 28 May 2009 12:15:02 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Continuent (was: Postgres Clustering)" }, { "msg_contents": "----- \"Alan McKay\" <[email protected]> escreveu: \n> Hmmm. Anyone out there have the Continuent solution working with PostgreSQL? \n> If so, what release? We're at 8.3 right now. \n\nI have tested Sequoia 2.10.10 with a high transaction rate database with good servers and plenty of memory. Since that's a OLTP system the performance droped to as low as 30%. \n\nI can't recomend their solution for a OLAP system because I never tested in this situation. \n\nConfiguration of Sequoia is quite complicated and I think a very good Database Administrator is needed to keep it working correctly and nodes syncronized. \nSequoia also is very complicated to run ddl and dml scripts since your scrips should be written for Sequoia, not for PostgreSQL. \n\nIf log-shipping works for you, try Slony. Your slaves can serve as read-only databases and you can distribute some load. \n\nFlavio \n\n----- \"Alan McKay\" <[email protected]> escreveu:\n> Hmmm.   Anyone out there have the Continuent solution working with PostgreSQL?> If so, what release?  We're at 8.3 right now.I have tested Sequoia 2.10.10 with a high transaction rate database with good servers and plenty of memory. Since that's a OLTP system the performance droped to as low as 30%.I can't recomend their solution for a OLAP system because I never tested in this situation.Configuration of Sequoia is quite complicated and I think a very good Database Administrator is needed to keep it working correctly and nodes syncronized.Sequoia also is very complicated to run ddl and dml scripts since your scrips should be written for Sequoia, not for PostgreSQL.If log-shipping works for you, try Slony. Your slaves can serve as read-only databases and you can distribute some load.Flavio", "msg_date": "Thu, 28 May 2009 21:26:32 -0300 (BRT)", "msg_from": "Flavio Henrique Araque Gurgel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Continuent (was: Postgres Clustering)" } ]
[ { "msg_contents": "HI.\n\nSomeone had some experience of bad performance with postgres in some server\nwith many processors?\n\nI have a server with 4 CPUS dual core and gives me a very good performance\nbut I have experienced problems with another server that has 8 CPUS quad\ncore (32 cores). The second one only gives me about 1.5 of performance of\nthe first one.\n\nMonitoring (nmon, htop, vmstat) see that everything is fine (memory, HD,\neth, etc) except that processors regularly climb to 100%.\n\nI can see that the processes are waiting for CPU time:\n\nvmstat 1\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 0 0 0 47123020 117320 17141932 0 0 8 416 1548 2189 1 1\n99 0 0\n 0 0 0 47121904 117328 17141940 0 0 8 148 1428 2107 1 1\n98 0 0\n 0 0 0 47123144 117336 17141956 0 0 8 172 1391 1930 1 0\n99 0 0\n 0 0 0 47124756 117352 17141940 0 0 8 276 1327 2171 1 1\n98 0 0\n 0 0 0 47118556 117360 17141956 0 0 0 100 1452 2254 1 1\n98 0 0\n 2 0 0 47120364 117380 17141952 0 0 8 428 1526 2477 1 0\n99 0 0\n 1 0 0 47119372 117388 17141972 0 0 0 452 1581 2662 1 1\n98 0 0\n 0 0 0 47117948 117396 17141988 0 0 16 468 1705 3243 1 1\n97 0 0\n 0 0 0 47116708 117404 17142020 0 0 0 268 1610 2115 1 1\n99 0 0\n 0 0 0 47119688 117420 17142044 0 0 0 200 1545 1810 1 1\n98 0 0\n318 0 0 47116464 117440 17142052 0 0 0 532 1416 2396 1 0\n99 0 0\n500 0 0 47115224 117440 17142052 0 0 0 0 1118 322144 91\n5 4 0 0\n440 0 0 47114728 117440 17142044 0 0 0 0 1052 333137 90\n5 5 0 0\n339 0 0 47114484 117440 17142048 0 0 0 0 1061 337528 85\n4 11 0 0\n179 0 0 47114112 117440 17142048 0 0 0 0 1066 312873 71\n4 25 0 0\n 5 1 0 47122180 117468 17142028 0 0 192 3128 1958 136804 23\n2 75 1 0\n 3 0 0 47114264 117476 17142968 0 0 608 5828 2688 4684 7 2\n89 2 0\n 0 1 0 47109940 117484 17142876 0 0 512 5084 2248 3727 3 1\n94 2 0\n 0 1 0 47119692 117500 17143816 0 0 520 4976 2231 2941 2 1\n95 2 0\n\n\nHave postgres problems of lock or degradation of performance with many\nCPU's?\nAny comments?\n\nRH 5.2. PG 8.3.6 and 64 bits.\n\nthanks for the reply\nregars ...\n\n\nHI. Someone had some experience of bad performance with postgres in some server with many processors? \n\n I have a server with 4 CPUS dual core  and gives me a very good\nperformance but I have experienced problems with another server that\nhas 8 CPUS quad core (32 cores). The second one only gives me about 1.5 of performance of the first one.\n Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD, eth, etc) except that processors\nregularly climb to 100%. 
I can see that the processes are waiting for CPU time:vmstat 1procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st\n 0  0      0 47123020 117320 17141932    0    0     8   416 1548 2189  1  1 99  0  0 0  0      0 47121904 117328 17141940    0    0     8   148 1428 2107  1  1 98  0  0 0  0      0 47123144 117336 17141956    0    0     8   172 1391 1930  1  0 99  0  0\n 0  0      0 47124756 117352 17141940    0    0     8   276 1327 2171  1  1 98  0  0 0  0      0 47118556 117360 17141956    0    0     0   100 1452 2254  1  1 98  0  0 2  0      0 47120364 117380 17141952    0    0     8   428 1526 2477  1  0 99  0  0\n 1  0      0 47119372 117388 17141972    0    0     0   452 1581 2662  1  1 98  0  0 0  0      0 47117948 117396 17141988    0    0    16   468 1705 3243  1  1 97  0  0 0  0      0 47116708 117404 17142020    0    0     0   268 1610 2115  1  1 99  0  0\n 0  0      0 47119688 117420 17142044    0    0     0   200 1545 1810  1  1 98  0  0318  0      0 47116464 117440 17142052    0    0     0   532 1416 2396  1  0 99  0  0500  0      0 47115224 117440 17142052    0    0     0     0 1118 322144 91  5  4  0  0\n440  0      0 47114728 117440 17142044    0    0     0     0 1052 333137 90  5  5  0  0339  0      0 47114484 117440 17142048    0    0     0     0 1061 337528 85  4 11  0  0179  0      0 47114112 117440 17142048    0    0     0     0 1066 312873 71  4 25  0  0\n 5  1      0 47122180 117468 17142028    0    0   192  3128 1958 136804 23  2 75  1  0 3  0      0 47114264 117476 17142968    0    0   608  5828 2688 4684  7  2 89  2  0 0  1      0 47109940 117484 17142876    0    0   512  5084 2248 3727  3  1 94  2  0\n 0  1      0 47119692 117500 17143816    0    0   520  4976 2231 2941  2  1 95  2  0\nHave postgres problems of lock or degradation of performance with many CPU's?Any comments?\n RH 5.2. PG 8.3.6 and 64 bits. \n\n thanks for the reply \nregars ...", "msg_date": "Thu, 28 May 2009 12:50:56 -0600", "msg_from": "Fabrix <[email protected]>", "msg_from_op": true, "msg_subject": "Scalability in postgres" }, { "msg_contents": "On Thu, May 28, 2009 at 11:50 AM, Fabrix <[email protected]> wrote:\n> Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD,\n> eth, etc) except that processors regularly climb to 100%.\n\nWhat kind of load are you putting the server under when this happens?\n\n> I can see that the processes are waiting for CPU time:\n>\n> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------\n> r b swpd free buff cache si so bi bo in cs us sy id wa st\n> 0 0 0 47119688 117420 17142044 0 0 0 200 1545 1810 1 1 98 0 0\n> 318 0 0 47116464 117440 17142052 0 0 0 532 1416 2396 1 0 99 0 0\n> 500 0 0 47115224 117440 17142052 0 0 0 0 1118 322144 91 5 4 0 0\n> 440 0 0 47114728 117440 17142044 0 0 0 0 1052 333137 90 5 5 0 0\n> 339 0 0 47114484 117440 17142048 0 0 0 0 1061 337528 85 4 11 0 0\n> 179 0 0 47114112 117440 17142048 0 0 0 0 1066 312873 71 4 25 0 0\n> 5 1 0 47122180 117468 17142028 0 0 192 3128 1958 136804 23 2 75 1 0\n> 3 0 0 47114264 117476 17142968 0 0 608 5828 2688 4684 7 2 89 2 0\n\nWow, that's some serious context-switching right there - 300k context\nswitches a second mean that the processors are spending a lot of their\ntime fighting for CPU time instead of doing any real work.\n\nIt appears that you have the server configured with a very high number\nof connections as well? 
My first suggestion would be to look at a way\nto limit the number of active connections to the server at a time\n(pgPool or similar).\n\n-Dave\n", "msg_date": "Thu, 28 May 2009 13:12:17 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, May 28, 2009 at 12:50 PM, Fabrix <[email protected]> wrote:\n>\n> HI.\n>\n> Someone had some experience of bad performance with postgres in some server\n> with many processors?\n\nSeems to depend on the processors and chipset a fair bit.\n\n> I have a server with 4 CPUS dual core  and gives me a very good performance\n> but I have experienced problems with another server that has 8 CPUS quad\n> core (32 cores). The second one only gives me about 1.5 of performance of\n> the first one.\n\nWhat model CPUs and chipset on the mobo I wonder?\n\n> Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD,\n> eth, etc) except that processors regularly climb to 100%.\n>\n> I can see that the processes are waiting for CPU time:\n>\n> vmstat 1\n>\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id\n> wa st\n>  0  0      0 47123020 117320 17141932    0    0     8   416 1548 2189  1  1\n> 99  0  0\n>  0  0      0 47121904 117328 17141940    0    0     8   148 1428 2107  1  1\n> 98  0  0\n>  0  0      0 47123144 117336 17141956    0    0     8   172 1391 1930  1  0\n> 99  0  0\n>  0  0      0 47124756 117352 17141940    0    0     8   276 1327 2171  1  1\n> 98  0  0\n>  0  0      0 47118556 117360 17141956    0    0     0   100 1452 2254  1  1\n> 98  0  0\n>  2  0      0 47120364 117380 17141952    0    0     8   428 1526 2477  1  0\n> 99  0  0\n>  1  0      0 47119372 117388 17141972    0    0     0   452 1581 2662  1  1\n> 98  0  0\n>  0  0      0 47117948 117396 17141988    0    0    16   468 1705 3243  1  1\n> 97  0  0\n>  0  0      0 47116708 117404 17142020    0    0     0   268 1610 2115  1  1\n> 99  0  0\n>  0  0      0 47119688 117420 17142044    0    0     0   200 1545 1810  1  1\n> 98  0  0\n> 318  0      0 47116464 117440 17142052    0    0     0   532 1416 2396  1  0\n> 99  0  0\n> 500  0      0 47115224 117440 17142052    0    0     0     0 1118 322144 91\n> 5  4  0  0\n> 440  0      0 47114728 117440 17142044    0    0     0     0 1052 333137 90\n> 5  5  0  0\n> 339  0      0 47114484 117440 17142048    0    0     0     0 1061 337528 85\n> 4 11  0  0\n> 179  0      0 47114112 117440 17142048    0    0     0     0 1066 312873 71\n> 4 25  0  0\n>  5  1      0 47122180 117468 17142028    0    0   192  3128 1958 136804 23\n> 2 75  1  0\n>  3  0      0 47114264 117476 17142968    0    0   608  5828 2688 4684  7  2\n> 89  2  0\n>  0  1      0 47109940 117484 17142876    0    0   512  5084 2248 3727  3  1\n> 94  2  0\n>  0  1      0 47119692 117500 17143816    0    0   520  4976 2231 2941  2  1\n> 95  2  0\n>\n>\n> Have postgres problems of lock or degradation of performance with many\n> CPU's?\n> Any comments?\n\nLooks like a context switch storm, which was pretty common on older\nXeon CPUs. 
I imagine with enough pg processes running on enough CPUs\nit could still be a problem.\n", "msg_date": "Thu, 28 May 2009 14:22:27 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Thanks David...\n\n\n\n2009/5/28 David Rees <[email protected]>\n\n> On Thu, May 28, 2009 at 11:50 AM, Fabrix <[email protected]> wrote:\n> > Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD,\n> > eth, etc) except that processors regularly climb to 100%.\n>\n> What kind of load are you putting the server under when this happens?\n>\n\nI have many windows clients connecting to the database for odbc, they do\nselect, insert and update data. All these operations are all answer very\nquickly in less than 1 second are well optimized, but when processors go up\nto 100% All queries go up from 10 to 18 seconds, and are the same type of\noperations when this happends.\n\n\n> > I can see that the processes are waiting for CPU time:\n> >\n> > procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n> > r b swpd free buff cache si so bi bo in cs us sy\n> id wa st\n> > 0 0 0 47119688 117420 17142044 0 0 0 200 1545 1810 1\n> 1 98 0 0\n> > 318 0 0 47116464 117440 17142052 0 0 0 532 1416 2396 1\n> 0 99 0 0\n> > 500 0 0 47115224 117440 17142052 0 0 0 0 1118 322144\n> 91 5 4 0 0\n> > 440 0 0 47114728 117440 17142044 0 0 0 0 1052 333137\n> 90 5 5 0 0\n> > 339 0 0 47114484 117440 17142048 0 0 0 0 1061 337528\n> 85 4 11 0 0\n> > 179 0 0 47114112 117440 17142048 0 0 0 0 1066 312873\n> 71 4 25 0 0\n> > 5 1 0 47122180 117468 17142028 0 0 192 3128 1958 136804\n> 23 2 75 1 0\n> > 3 0 0 47114264 117476 17142968 0 0 608 5828 2688 4684 7\n> 2 89 2 0\n>\n> Wow, that's some serious context-switching right there - 300k context\n> switches a second mean that the processors are spending a lot of their\n> time fighting for CPU time instead of doing any real work.\n>\n> It appears that you have the server configured with a very high number\n> of connections as well? My first suggestion would be to look at a way\n> to limit the number of active connections to the server at a time\n> (pgPool or similar).\n\n\nyes, i have max_connections = 5000\ncan lower, but at least i need 3500 connections\n\n\n>\n>\n> -Dave\n>\n\nThanks David...2009/5/28 David Rees <[email protected]>\nOn Thu, May 28, 2009 at 11:50 AM, Fabrix <[email protected]> wrote:\n> Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD,\n> eth, etc) except that processors regularly climb to 100%.\n\nWhat kind of load are you putting the server under when this happens?\nI have many windows clients connecting to the database for odbc, they do select,\ninsert and update data. 
All these operations are all answer very\nquickly in less than 1 second are well optimized, but when processors\ngo up to 100% All queries go up from 10  to 18 seconds, and are the same\ntype of operations when this happends.\n> I can see that the processes are waiting for CPU time:\n>\n> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------\n>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st\n>  0  0      0 47119688 117420 17142044    0    0     0   200 1545 1810  1  1 98  0  0\n> 318  0      0 47116464 117440 17142052    0    0     0   532 1416 2396  1  0 99  0  0\n> 500  0      0 47115224 117440 17142052    0    0     0     0 1118 322144 91  5  4  0  0\n> 440  0      0 47114728 117440 17142044    0    0     0     0 1052 333137 90  5  5  0  0\n> 339  0      0 47114484 117440 17142048    0    0     0     0 1061 337528 85  4 11  0  0\n> 179  0      0 47114112 117440 17142048    0    0     0     0 1066 312873 71  4 25  0  0\n>  5  1      0 47122180 117468 17142028    0    0   192  3128 1958 136804 23  2 75  1  0\n>  3  0      0 47114264 117476 17142968    0    0   608  5828 2688 4684  7  2 89  2  0\n\nWow, that's some serious context-switching right there - 300k context\nswitches a second mean that the processors are spending a lot of their\ntime fighting for CPU time instead of doing any real work.\n\nIt appears that you have the server configured with a very high number\nof connections as well?  My first suggestion would be to look at a way\nto limit the number of active connections to the server at a time\n(pgPool or similar).yes, i have max_connections = 5000\ncan lower, but at least i need 3500 connections \n\n\n-Dave", "msg_date": "Thu, 28 May 2009 14:53:15 -0600", "msg_from": "Fabrix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, May 28, 2009 at 4:53 PM, Fabrix <[email protected]> wrote:\n\n>\n>\n>>\n>> Wow, that's some serious context-switching right there - 300k context\n>> switches a second mean that the processors are spending a lot of their\n>> time fighting for CPU time instead of doing any real work.\n>\n>\n There is a bug in the quad core chips during a massive amount of\nconnections that will cause all cores to go to 100% utilization and no work\nbe done. I'm digging to find links, but if I remember correctly, the only\nway to fix it was to disable the 4th core in linux (involved some black\nmagic in /proc). You really need to lower the number of processes you're\nforcing each processor bus to switch through (or switch to AMD's\nhyper-transport bus).\n\n\n>\n>>\n>> It appears that you have the server configured with a very high number\n>> of connections as well? My first suggestion would be to look at a way\n>> to limit the number of active connections to the server at a time\n>> (pgPool or similar).\n>\n>\n> yes, i have max_connections = 5000\n> can lower, but at least i need 3500 connections\n>\n\nTypically, it's a bad idea to run run with anything over 1000 connections\n(many will suggest lower than that). If you need that many connections,\nyou'll want to look at a connection pool like pgBouncer or pgPool.\n\n--Scott\n\nOn Thu, May 28, 2009 at 4:53 PM, Fabrix <[email protected]> wrote:\n\n\n\nWow, that's some serious context-switching right there - 300k context\nswitches a second mean that the processors are spending a lot of their\ntime fighting for CPU time instead of doing any real work.  
There is a bug in the quad core chips during a massive amount of connections that will cause all cores to go to 100% utilization and no work be done.  I'm digging to find links, but if I remember correctly, the only way to fix it was to disable the 4th core in linux (involved some black magic in /proc).  You really need to lower the number of processes you're forcing each processor bus to switch through (or switch to AMD's hyper-transport bus).\n \n\n\nIt appears that you have the server configured with a very high number\nof connections as well?  My first suggestion would be to look at a way\nto limit the number of active connections to the server at a time\n(pgPool or similar).yes, i have max_connections = 5000\ncan lower, but at least i need 3500 connectionsTypically, it's a bad idea to run run with anything over 1000 connections (many will suggest lower than that).  If you need that many connections, you'll want to look at a connection pool like pgBouncer or pgPool.  \n--Scott", "msg_date": "Thu, 28 May 2009 16:59:05 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, May 28, 2009 at 2:53 PM, Fabrix <[email protected]> wrote:\n> yes, i have max_connections = 5000\n> can lower, but at least i need 3500 connections\n\n\nWhoa, that's a lot. Can you look into connection pooling of some sort?\n", "msg_date": "Thu, 28 May 2009 15:11:18 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Thanks Scott\n\n2009/5/28 Scott Marlowe <[email protected]>\n\n> On Thu, May 28, 2009 at 12:50 PM, Fabrix <[email protected]> wrote:\n> >\n> > HI.\n> >\n> > Someone had some experience of bad performance with postgres in some\n> server\n> > with many processors?\n>\n> Seems to depend on the processors and chipset a fair bit.\n>\n> > I have a server with 4 CPUS dual core and gives me a very good\n> performance\n> > but I have experienced problems with another server that has 8 CPUS quad\n> > core (32 cores). The second one only gives me about 1.5 of performance of\n> > the first one.\n>\n> What model CPUs and chipset on the mobo I wonder?\n\n\nI have 2 Servers of this type in which I tested, an HP and IBM. 
HP gives me\nbetter performance, IBM already discarded\n\nServer HP:\ncat /proc/cpuinfo\nprocessor : 0\nvendor_id : AuthenticAMD\ncpu family : 16\nmodel : 2\nmodel name : Quad-Core AMD Opteron(tm) Processor 8360 SE\nstepping : 3\ncpu MHz : 2500.091\ncache size : 512 KB\nphysical id : 0\nsiblings : 4\ncore id : 0\ncpu cores : 4\nfpu : yes\nfpu_exception : yes\ncpuid level : 5\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov\npat pse36 clflush mmx fxsr sse sse2 ht syscall mmxext fxsr_opt pdpe1gb\nrdtscp lm 3dnowext 3dnow constant_tsc pni cx16 popcnt lahf_lm cmp_legacy svm\nextapic cr8_legacy altmovcr8 abm sse4a misalignsse 3dnowprefetch osvw\nbogomips : 5004.94\nTLB size : 1024 4K pages\nclflush size : 64\ncache_alignment : 64\naddress sizes : 48 bits physical, 48 bits virtual\npower management: ts ttp tm stc 100mhzsteps hwpstate [8]\n\n\nServer IBM:\n cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU X7350 @ 2.93GHz\nstepping : 11\ncpu MHz : 2931.951\ncache size : 4096 KB\nphysical id : 3\nsiblings : 4\ncore id : 0\ncpu cores : 4\napicid : 12\nfpu : yes\nfpu_exception : yes\ncpuid level : 10\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov\npat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm\nconstant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm\nbogomips : 5867.00\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\n\n>\n> > Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD,\n> > eth, etc) except that processors regularly climb to 100%.\n> >\n> > I can see that the processes are waiting for CPU time:\n> >\n> > vmstat 1\n> >\n> > procs -----------memory---------- ---swap-- -----io---- --system--\n> > -----cpu------\n> > r b swpd free buff cache si so bi bo in cs us sy\n> id\n> > wa st\n> > 0 0 0 47123020 117320 17141932 0 0 8 416 1548 2189 1\n> 1\n> > 99 0 0\n> > 0 0 0 47121904 117328 17141940 0 0 8 148 1428 2107 1\n> 1\n> > 98 0 0\n> > 0 0 0 47123144 117336 17141956 0 0 8 172 1391 1930 1\n> 0\n> > 99 0 0\n> > 0 0 0 47124756 117352 17141940 0 0 8 276 1327 2171 1\n> 1\n> > 98 0 0\n> > 0 0 0 47118556 117360 17141956 0 0 0 100 1452 2254 1\n> 1\n> > 98 0 0\n> > 2 0 0 47120364 117380 17141952 0 0 8 428 1526 2477 1\n> 0\n> > 99 0 0\n> > 1 0 0 47119372 117388 17141972 0 0 0 452 1581 2662 1\n> 1\n> > 98 0 0\n> > 0 0 0 47117948 117396 17141988 0 0 16 468 1705 3243 1\n> 1\n> > 97 0 0\n> > 0 0 0 47116708 117404 17142020 0 0 0 268 1610 2115 1\n> 1\n> > 99 0 0\n> > 0 0 0 47119688 117420 17142044 0 0 0 200 1545 1810 1\n> 1\n> > 98 0 0\n> > 318 0 0 47116464 117440 17142052 0 0 0 532 1416 2396\n> 1 0\n> > 99 0 0\n> > 500 0 0 47115224 117440 17142052 0 0 0 0 1118 322144\n> 91\n> > 5 4 0 0\n> > 440 0 0 47114728 117440 17142044 0 0 0 0 1052 333137\n> 90\n> > 5 5 0 0\n> > 339 0 0 47114484 117440 17142048 0 0 0 0 1061 337528\n> 85\n> > 4 11 0 0\n> > 179 0 0 47114112 117440 17142048 0 0 0 0 1066 312873\n> 71\n> > 4 25 0 0\n> > 5 1 0 47122180 117468 17142028 0 0 192 3128 1958 136804\n> 23\n> > 2 75 1 0\n> > 3 0 0 47114264 117476 17142968 0 0 608 5828 2688 4684 7\n> 2\n> > 89 2 0\n> > 0 1 0 47109940 117484 17142876 0 0 512 5084 2248 3727 3\n> 1\n> > 94 2 0\n> > 0 1 0 47119692 117500 17143816 0 0 520 4976 2231 2941 2\n> 1\n> > 95 2 0\n> >\n> >\n> > Have postgres problems of lock or degradation of performance with many\n> > CPU's?\n> > Any comments?\n>\n> Looks like a context 
switch storm, which was pretty common on older\n> Xeon CPUs. I imagine with enough pg processes running on enough CPUs\n> it could still be a problem.\n\n\nThese CPUs are very new.\n\nThanks Scott2009/5/28 Scott Marlowe <[email protected]>\nOn Thu, May 28, 2009 at 12:50 PM, Fabrix <[email protected]> wrote:\n>\n> HI.\n>\n> Someone had some experience of bad performance with postgres in some server\n> with many processors?\n\nSeems to depend on the processors and chipset a fair bit.\n\n> I have a server with 4 CPUS dual core  and gives me a very good performance\n> but I have experienced problems with another server that has 8 CPUS quad\n> core (32 cores). The second one only gives me about 1.5 of performance of\n> the first one.\n\nWhat model CPUs and chipset on the mobo I wonder?I have 2 Servers of this type in which I tested, an HP and IBM. HP gives me better performance, IBM already discardedServer HP:cat /proc/cpuinfo \nprocessor    : 0vendor_id    : AuthenticAMDcpu family    : 16model        : 2model name    : Quad-Core AMD Opteron(tm) Processor 8360 SEstepping    : 3cpu MHz        : 2500.091cache size    : 512 KB\nphysical id    : 0siblings    : 4core id        : 0cpu cores    : 4fpu        : yesfpu_exception    : yescpuid level    : 5wp        : yesflags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc pni cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy altmovcr8 abm sse4a misalignsse 3dnowprefetch osvw\nbogomips    : 5004.94TLB size    : 1024 4K pagesclflush size    : 64cache_alignment    : 64address sizes    : 48 bits physical, 48 bits virtualpower management: ts ttp tm stc 100mhzsteps hwpstate [8]\nServer IBM: cat /proc/cpuinfo processor    : 0vendor_id    : GenuineIntelcpu family    : 6model        : 15model name    : Intel(R) Xeon(R) CPU           X7350  @ 2.93GHzstepping    : 11\ncpu MHz        : 2931.951cache size    : 4096 KBphysical id    : 3siblings    : 4core id        : 0cpu cores    : 4apicid        : 12fpu        : yesfpu_exception    : yescpuid level    : 10\nwp        : yesflags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm\nbogomips    : 5867.00clflush size    : 64cache_alignment    : 64address sizes    : 40 bits physical, 48 bits virtualpower management:\n\n\n> Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD,\n> eth, etc) except that processors regularly climb to 100%.\n>\n> I can see that the processes are waiting for CPU time:\n>\n> vmstat 1\n>\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id\n> wa st\n>  0  0      0 47123020 117320 17141932    0    0     8   416 1548 2189  1  1\n> 99  0  0\n>  0  0      0 47121904 117328 17141940    0    0     8   148 1428 2107  1  1\n> 98  0  0\n>  0  0      0 47123144 117336 17141956    0    0     8   172 1391 1930  1  0\n> 99  0  0\n>  0  0      0 47124756 117352 17141940    0    0     8   276 1327 2171  1  1\n> 98  0  0\n>  0  0      0 47118556 117360 17141956    0    0     0   100 1452 2254  1  1\n> 98  0  0\n>  2  0      0 47120364 117380 17141952    0    0     8   428 1526 2477  1  0\n> 99  0  0\n>  1  0      0 47119372 117388 17141972    0    0     0   452 1581 2662  1  1\n> 98  0  0\n>  0  0     
 0 47117948 117396 17141988    0    0    16   468 1705 3243  1  1\n> 97  0  0\n>  0  0      0 47116708 117404 17142020    0    0     0   268 1610 2115  1  1\n> 99  0  0\n>  0  0      0 47119688 117420 17142044    0    0     0   200 1545 1810  1  1\n> 98  0  0\n> 318  0      0 47116464 117440 17142052    0    0     0   532 1416 2396  1  0\n> 99  0  0\n> 500  0      0 47115224 117440 17142052    0    0     0     0 1118 322144 91\n> 5  4  0  0\n> 440  0      0 47114728 117440 17142044    0    0     0     0 1052 333137 90\n> 5  5  0  0\n> 339  0      0 47114484 117440 17142048    0    0     0     0 1061 337528 85\n> 4 11  0  0\n> 179  0      0 47114112 117440 17142048    0    0     0     0 1066 312873 71\n> 4 25  0  0\n>  5  1      0 47122180 117468 17142028    0    0   192  3128 1958 136804 23\n> 2 75  1  0\n>  3  0      0 47114264 117476 17142968    0    0   608  5828 2688 4684  7  2\n> 89  2  0\n>  0  1      0 47109940 117484 17142876    0    0   512  5084 2248 3727  3  1\n> 94  2  0\n>  0  1      0 47119692 117500 17143816    0    0   520  4976 2231 2941  2  1\n> 95  2  0\n>\n>\n> Have postgres problems of lock or degradation of performance with many\n> CPU's?\n> Any comments?\n\nLooks like a context switch storm, which was pretty common on older\nXeon CPUs.  I imagine with enough pg processes running on enough CPUs\nit could still be a problem.These CPUs are very new.", "msg_date": "Thu, 28 May 2009 15:28:01 -0600", "msg_from": "Fabrix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/28 Scott Mead <[email protected]>\n\n> On Thu, May 28, 2009 at 4:53 PM, Fabrix <[email protected]> wrote:\n>\n>>\n>>\n>>>\n>>> Wow, that's some serious context-switching right there - 300k context\n>>> switches a second mean that the processors are spending a lot of their\n>>> time fighting for CPU time instead of doing any real work.\n>>\n>>\n> There is a bug in the quad core chips during a massive amount of\n> connections that will cause all cores to go to 100% utilization and no work\n> be done. I'm digging to find links, but if I remember correctly, the only\n> way to fix it was to disable the 4th core in linux (involved some black\n> magic in /proc). You really need to lower the number of processes you're\n> forcing each processor bus to switch through (or switch to AMD's\n> hyper-transport bus).\n>\n\nThe server HP is already AMD proccesor.\nThe server with 4 dual core had max_connections = 5000 too, but the maximum\nof connections at time were 1800 and work very well.\n\nIf you get the link on the bug's quad core I would greatly appreciate\n\n>\n>\n>>\n>>>\n>>> It appears that you have the server configured with a very high number\n>>> of connections as well? My first suggestion would be to look at a way\n>>> to limit the number of active connections to the server at a time\n>>> (pgPool or similar).\n>>\n>>\n>> yes, i have max_connections = 5000\n>> can lower, but at least i need 3500 connections\n>>\n>\n> Typically, it's a bad idea to run run with anything over 1000 connections\n> (many will suggest lower than that). If you need that many connections,\n> you'll want to look at a connection pool like pgBouncer or pgPool.\n>\n\n\nPostgres does not support more than 1000? even the server is very robust?\nI will try to lower... and already i have a pool (not pgpool and not\npgBouncer). 
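To make the pooling suggestion above concrete, a minimal pgbouncer.ini along these lines is one way to put a transaction-mode pool in front of the server; the database name, paths and pool sizes here are illustrative guesses, not values taken from this thread:

    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    ; thousands of client sockets are cheap for the pooler to accept...
    max_client_conn = 4000
    ; ...but only this many real backends per database are ever opened
    default_pool_size = 50

The application keeps its thousands of client connections, but they point at port 6432, and PostgreSQL itself can then run with max_connections in the low hundreds.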
I have distributed all connections in three servers :).\n\n>\n> --Scott\n>\n>\n>\n\n2009/5/28 Scott Mead <[email protected]>\nOn Thu, May 28, 2009 at 4:53 PM, Fabrix <[email protected]> wrote:\n\n\n\nWow, that's some serious context-switching right there - 300k context\nswitches a second mean that the processors are spending a lot of their\ntime fighting for CPU time instead of doing any real work.  There is a bug in the quad core chips during a massive amount of connections that will cause all cores to go to 100% utilization and no work be done.  I'm digging to find links, but if I remember correctly, the only way to fix it was to disable the 4th core in linux (involved some black magic in /proc).  You really need to lower the number of processes you're forcing each processor bus to switch through (or switch to AMD's hyper-transport bus).\nThe server HP is already AMD  proccesor.The server with 4 dual core\nhad max_connections = 5000 too, but the maximum of connections at time\nwere 1800 and work very well. If you get the link on the bug's quad core I would greatly appreciate \n\n \n\n\nIt appears that you have the server configured with a very high number\nof connections as well?  My first suggestion would be to look at a way\nto limit the number of active connections to the server at a time\n(pgPool or similar).yes, i have max_connections = 5000\ncan lower, but at least i need 3500 connectionsTypically, it's a bad idea to run run with anything over 1000 connections (many will suggest lower than that).  If you need that many connections, you'll want to look at a connection pool like pgBouncer or pgPool.  \nPostgres does not support more than 1000? even the server is very robust?I will try to lower... and already i have a pool (not pgpool and not\npgBouncer). I have distributed all connections in three servers :). \n\n--Scott", "msg_date": "Thu, 28 May 2009 16:13:41 -0600", "msg_from": "Fabrix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "----- \"Scott Marlowe\" <[email protected]> escreveu: \n> On Thu, May 28, 2009 at 12:50 PM, Fabrix <[email protected]> wrote: \n> > \n> > HI. \n> > \n> > Someone had some experience of bad performance with postgres in some server \n> > with many processors? \n\nI had. \n\n> > but I have experienced problems with another server that has 8 CPUS quad \n> > core (32 cores). The second one only gives me about 1.5 of performance of \n> > the first one. \n\nI have had problems with 4 CPUS dual core Hyper Threading (16 logical CPUS). \n\n> What model CPUs and chipset on the mobo I wonder? \n> \n> > Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD, \n> > eth, etc) except that processors regularly climb to 100%. \n> > \n> > I can see that the processes are waiting for CPU time: \n\n> > Have postgres problems of lock or degradation of performance with many \n> > CPU's? \n> > Any comments? \n> \n> Looks like a context switch storm, which was pretty common on older \n> Xeon CPUs. I imagine with enough pg processes running on enough CPUs \n> it could still be a problem. \n\nI would ask for your kernel version. uname -a please? \n\nIt was possible to make the context work better with 2.4.24 with kswapd patched around here. 1600 connections working fine at this moment. \nTry to lower your memory requirements too. Linux kernel needs some space to page and scale up. Install some more memory otherwise. 
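As a back-of-envelope illustration of that memory point (the numbers are hypothetical, not measured on either server): each sort or hash step in a query may use up to work_mem, so the worst case grows with the number of backends that are allowed to exist:

    5000 backends x 4MB work_mem  = ~20GB potentially claimed by sorts/hashes alone
     500 backends x 4MB work_mem  = ~2GB
    plus shared_buffers and a few MB of per-backend overhead each

Cutting max_connections by a factor of ten therefore shrinks the unpredictable worst case by an order of magnitude before any other tuning is attempted.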
\n\nFlavio \n\n\n----- \"Scott Marlowe\" <[email protected]> escreveu:\n> On Thu, May 28, 2009 at 12:50 PM, Fabrix <[email protected]> wrote:> >> > HI.> >> > Someone had some experience of bad performance with postgres in some server> > with many processors?I had.> > but I have experienced problems with another server that has 8 CPUS quad> > core (32 cores). The second one only gives me about 1.5 of performance of> > the first one.I have had problems with 4 CPUS dual core Hyper Threading (16 logical CPUS).> What model CPUs and chipset on the mobo I wonder?> > > Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD,> > eth, etc) except that processors regularly climb to 100%.> >> > I can see that the processes are waiting for CPU time:> > Have postgres problems of lock or degradation of performance with many> > CPU's?> > Any comments?> > Looks like a context switch storm, which was pretty common on older> Xeon CPUs.  I imagine with enough pg processes running on enough CPUs> it could still be a problem.I would ask for your kernel version. uname -a please?It was possible to make the context work better with 2.4.24 with kswapd patched around here. 1600 connections working fine at this moment.Try to lower your memory requirements too. Linux kernel needs some space to page and scale up. Install some more memory otherwise.Flavio", "msg_date": "Thu, 28 May 2009 21:35:12 -0300 (BRT)", "msg_from": "Flavio Henrique Araque Gurgel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/28 Flavio Henrique Araque Gurgel <[email protected]>\n\n> ----- \"Scott Marlowe\" <[email protected]> escreveu:\n> > On Thu, May 28, 2009 at 12:50 PM, Fabrix <[email protected]> wrote:\n> > >\n> > > HI.\n> > >\n> > > Someone had some experience of bad performance with postgres in some\n> server\n> > > with many processors?\n>\n> I had.\n>\n> > > but I have experienced problems with another server that has 8 CPUS\n> quad\n> > > core (32 cores). The second one only gives me about 1.5 of performance\n> of\n> > > the first one.\n>\n> I have had problems with 4 CPUS dual core Hyper Threading (16 logical\n> CPUS).\n>\n> > What model CPUs and chipset on the mobo I wonder?\n> >\n> > > Monitoring (nmon, htop, vmstat) see that everything is fine (memory,\n> HD,\n> > > eth, etc) except that processors regularly climb to 100%.\n> > >\n> > > I can see that the processes are waiting for CPU time:\n>\n> > > Have postgres problems of lock or degradation of performance with many\n> > > CPU's?\n> > > Any comments?\n> >\n> > Looks like a context switch storm, which was pretty common on older\n> > Xeon CPUs. I imagine with enough pg processes running on enough CPUs\n> > it could still be a problem.\n>\n> I would ask for your kernel version. uname -a please?\n>\n\nsure, and thanks for you answer Flavio...\n\nuname -a\nLinux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64\nx86_64 x86_64 GNU/Linux\n\ncat /etc/redhat-release\nRed Hat Enterprise Linux Server release 5.2 (Tikanga)\n\n\n>\n> It was possible to make the context work better with 2.4.24 with kswapd\n> patched around here. 1600 connections working fine at this moment.\n>\n\n2.4 is very old, or not?\n\n>\n> Try to lower your memory requirements too. Linux kernel needs some space to\n> page and scale up. 
Install some more memory otherwise.\n>\n\nhow much?\nalready I have a lot of memory installed in the server 128GB.\n\n\n> Flavio\n>\n>\n\n2009/5/28 Flavio Henrique Araque Gurgel <[email protected]>\n----- \"Scott Marlowe\" <[email protected]> escreveu:\n> On Thu, May 28, 2009 at 12:50 PM, Fabrix <[email protected]> wrote:> >> > HI.> >> > Someone had some experience of bad performance with postgres in some server\n> > with many processors?I had.> > but I have experienced problems with another server that has 8 CPUS quad> > core (32 cores). The second one only gives me about 1.5 of performance of\n> > the first one.I have had problems with 4 CPUS dual core Hyper Threading (16 logical CPUS).> What model CPUs and chipset on the mobo I wonder?> > > Monitoring (nmon, htop, vmstat) see that everything is fine (memory, HD,\n> > eth, etc) except that processors regularly climb to 100%.> >> > I can see that the processes are waiting for CPU time:> > Have postgres problems of lock or degradation of performance with many\n> > CPU's?> > Any comments?> > Looks like a context switch storm, which was pretty common on older> Xeon CPUs.  I imagine with enough pg processes running on enough CPUs> it could still be a problem.\nI would ask for your kernel version. uname -a please?sure, and thanks for you answer Flavio... uname -aLinux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux\n cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.2 (Tikanga)\nIt was possible to make the context work better with 2.4.24 with kswapd patched around here. 1600 connections working fine at this moment.\n2.4 is very old, or not?\nTry to lower your memory requirements too. Linux kernel needs some space to page and scale up. Install some more memory otherwise.how much?already I have a lot of  memory installed in the server 128GB.\n\nFlavio", "msg_date": "Thu, 28 May 2009 19:04:32 -0600", "msg_from": "Fabrix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, May 28, 2009 at 7:04 PM, Fabrix <[email protected]> wrote:\n>> I would ask for your kernel version. uname -a please?\n>\n> sure, and thanks for you answer Flavio...\n>\n> uname -a\n> Linux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64\n> x86_64 x86_64 GNU/Linux\n>\n> cat /etc/redhat-release\n> Red Hat Enterprise Linux Server release 5.2 (Tikanga)\n\nI'm running the same thing for an 8 core opteron (dual x4) and I have\ngotten better numbers from later distros (ubuntu 8.04 and 8.10) but I\njust don't trust them yet as server OSes for a database.\n\n>> It was possible to make the context work better with 2.4.24 with kswapd\n>> patched around here. 1600 connections working fine at this moment.\n>\n> 2.4 is very old, or not?\n\nI'm sure he meant 2.6.24\n\nThere's been a LOT of work on the stuff that was causing context\nswtich storms from the 2.6.18 to the latest 2.6 releases.\n", "msg_date": "Thu, 28 May 2009 19:16:27 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "I would ask for your kernel version. uname -a please? \n\n> sure, and thanks for you answer Flavio... 
\n> \n\n> uname -a \n> Linux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux \n> \n> cat /etc/redhat-release \n> Red Hat Enterprise Linux Server release 5.2 (Tikanga) \n> \n\nI had the same problem you're saying with Debian Etch 2.6.18 when the system needed more then 1000 connections. \n\n> \n\n\n\n> \n> \n> It was possible to make the context work better with 2.4.24 with kswapd patched around here. 1600 connections working fine at this moment. \n\n> 2.4 is very old, or not? \n\nMy mistake. It is 2.6.24 \nWe had to apply the kswapd patch also. It's important specially if you see your system % going as high as 99% in top and loosing the machine's control. I have read something about 2.6.28 had this patch accepted in mainstream. \n\n\n\n\n\n> \n> Try to lower your memory requirements too. Linux kernel needs some space to page and scale up. Install some more memory otherwise. \n> \n\n> how much? \n> already I have a lot of memory installed in the server 128GB. \n\nHere we have 16GB. I had to limit PostgreSQL memory requirements (shared_buffers + max_connections * work_mem) to about 40% RAM. effective_cache_size was not an issue and about 30% of RAM is working fine. Of course the cache is a matter of your context. \nSince we have fast queries with low memory requirements for sorting or nested loops, 1.5MB for work_mem was enough around here. 2GB of shared buffers worked like a charm but it's too low for the indexes I work with and I'm planning to increase it when I have more RAM. \n\nFlavio \n\n> \nI would ask for your kernel version. uname -a please?> sure, and thanks for you answer Flavio... > > uname -a> Linux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux> \n > cat /etc/redhat-release > Red Hat Enterprise Linux Server release 5.2 (Tikanga)> I had the same problem you're saying with Debian Etch 2.6.18 when the system needed more then 1000 connections.> \n> > > It was possible to make the context work better with 2.4.24 with kswapd patched around here. 1600 connections working fine at this moment.\n> 2.4 is very old, or not?My mistake. It is 2.6.24We had to apply the kswapd patch also. It's important specially if you see your system % going as high as 99% in top and loosing the machine's control. I have read something about 2.6.28 had this patch accepted in mainstream.> \n> Try to lower your memory requirements too. Linux kernel needs some space to page and scale up. Install some more memory otherwise.> > how much?> already I have a lot of  memory installed in the server 128GB.Here we have 16GB. I had to limit PostgreSQL memory requirements (shared_buffers + max_connections * work_mem) to about 40% RAM. effective_cache_size was not an issue and about 30% of RAM is working fine. Of course the cache is a matter of your context.Since we have fast queries with low memory requirements for sorting or nested loops, 1.5MB for work_mem was enough around here. 2GB of shared buffers worked like a charm but it's too low for the indexes I work with and I'm planning to increase it when I have more RAM.Flavio>", "msg_date": "Thu, 28 May 2009 22:24:37 -0300 (BRT)", "msg_from": "Flavio Henrique Araque Gurgel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, 28 May 2009, Flavio Henrique Araque Gurgel wrote:\n\n> It is 2.6.24 We had to apply the kswapd patch also. 
It's important \n> specially if you see your system % going as high as 99% in top and \n> loosing the machine's control. I have read something about 2.6.28 had \n> this patch accepted in mainstream.\n\nIt would help if you gave more specific information about what you're \ntalking about. I know there was a bunch of back and forth on the \"kswapd \nshould only wait on IO if there is IO\" patch, where it was commited and \nthen reverted etc, but it's not clear to me if that's what you're talking \nabout--and if so, what that has to do with the context switch problem.\n\nBack to Fabrix's problem. You're fighting a couple of losing battles \nhere. Let's go over the initial list:\n\n1) You have 32 cores. You think they should be allowed to schedule\n>3500 active connections across them. That doesn't work, and what happens\nis exactly the sort of context switch storm you're showing data for. \nThink about it for a minute: how many of those can really be doing work \nat any time? 32, that's how many. Now, you need some multiple of the \nnumber of cores to try to make sure everybody is always busy, but that \nmultiple should be closer to 10X the number of cores rather than 100X. \nYou need to adjust the connection pool ratio so that the PostgreSQL \nmax_connections is closer to 500 than 5000, and this is by far the most \ncritical thing for you to do. The PostgreSQL connection handler is known \nto be bad at handling high connection loads compared to the popular \npooling projects, so you really shouldn't throw this problem at it. \nWhile kernel problems stack on top of that, you really shouldn't start at \nkernel fixes; nail the really fundamental and obvious problem first.\n\n2) You have very new hardware and a very old kernel. Once you've done the \nabove, if you're still not happy with performance, at that point you \nshould consider using a newer one. It's fairly simple to build a Linux \nkernel using the same basic kernel parameters as the stock RedHat one. \n2.6.28 is six months old now, is up to 2.6.28.10, and has gotten a lot \nmore testing than most kernels due to it being the Ubuntu 9.04 default. \nI'd suggest you try out that version.\n\n3) A system with 128GB of RAM is in a funny place where by using the \ndefaults or the usual rules of thumb for a lot of parameters (\"set \nshared_buffers to 1/4 of RAM\") are all bad ideas. shared_buffers seems to \ntop out its usefulness around 10GB on current generation \nhardware/software, and some Linux memory tunables have defaults on 2.6.18 \nthat are insane for your system; vm_dirty_ratio at 40 comes to mind as the \none I run into most. Some of that gets fixed just by moving to a newer \nkernel, some doesn't. Again, these aren't the problems you're having now \nthough; they're the ones you'll have in the future *if* you fix the more \nfundamental problems first.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 28 May 2009 21:54:52 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, May 29, 2009 at 2:54 AM, Greg Smith <[email protected]> wrote:\n\n>  The PostgreSQL connection handler is known to be bad at handling high\n> connection loads compared to the popular pooling projects, so you really\n> shouldn't throw this problem at it. 
While kernel problems stack on top of\n> that, you really shouldn't start at kernel fixes; nail the really\n> fundamental and obvious problem first.\n\nif it is implemented somewhere else better, shouldn't that make it\nobvious that postgresql should solve it internally ? It is really\nannoying to hear all the time that you should add additional path of\nexecution to already complex stack, and rely on more code to handle\nsomething (poolers).\n\n\n-- \nGJ\n", "msg_date": "Fri, 29 May 2009 11:45:02 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/29 Grzegorz Jaśkiewicz <[email protected]>:\n> On Fri, May 29, 2009 at 2:54 AM, Greg Smith <[email protected]> wrote:\n>\n>>  The PostgreSQL connection handler is known to be bad at handling high\n>> connection loads compared to the popular pooling projects, so you really\n>> shouldn't throw this problem at it. While kernel problems stack on top of\n>> that, you really shouldn't start at kernel fixes; nail the really\n>> fundamental and obvious problem first.\n>\n> if it is implemented somewhere else better, shouldn't that make it\n> obvious that postgresql should solve it internally ? It is really\n> annoying to hear all the time that you should add additional path of\n> execution to already complex stack, and rely on more code to handle\n> something (poolers).\n\nWell Oracle I know suffers from the same issue under any real load.\nOn a large transactional system where I last worked, we had to keep\nthe live connections to the db down to under 100 to keep it running\nwell. The complexity has to go somewhere, and poolers are usually a\nbetter choice.\n", "msg_date": "Fri, 29 May 2009 06:10:44 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/29 Grzegorz Jaśkiewicz <[email protected]>:\n> On Fri, May 29, 2009 at 2:54 AM, Greg Smith <[email protected]> wrote:\n>\n>>  The PostgreSQL connection handler is known to be bad at handling high\n>> connection loads compared to the popular pooling projects, so you really\n>> shouldn't throw this problem at it. While kernel problems stack on top of\n>> that, you really shouldn't start at kernel fixes; nail the really\n>> fundamental and obvious problem first.\n>\n> if it is implemented somewhere else better, shouldn't that make it\n> obvious that postgresql should solve it internally ? It is really\n> annoying to hear all the time that you should add additional path of\n> execution to already complex stack, and rely on more code to handle\n> something (poolers).\n\nOTOH, you're always free to submit a patch.\n", "msg_date": "Fri, 29 May 2009 06:11:10 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/29 Scott Marlowe <[email protected]>:\n\n>> if it is implemented somewhere else better, shouldn't that make it\n>> obvious that postgresql should solve it internally ? 
It is really\n>> annoying to hear all the time that you should add additional path of\n>> execution to already complex stack, and rely on more code to handle\n>> something (poolers).\n>\n> OTOH, you're always free to submit a patch.\n:P\n\nI thought that's where the difference is between postgresql and oracle\nmostly, ability to handle more transactions and better scalability .\n\n\n\n\n-- \nGJ\n", "msg_date": "Fri, 29 May 2009 13:13:51 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/29 Grzegorz Jaśkiewicz <[email protected]>:\n> 2009/5/29 Scott Marlowe <[email protected]>:\n>\n>>> if it is implemented somewhere else better, shouldn't that make it\n>>> obvious that postgresql should solve it internally ? It is really\n>>> annoying to hear all the time that you should add additional path of\n>>> execution to already complex stack, and rely on more code to handle\n>>> something (poolers).\n>>\n>> OTOH, you're always free to submit a patch.\n> :P\n>\n> I thought that's where the difference is between postgresql and oracle\n> mostly, ability to handle more transactions and better scalability .\n\nBoth Oracle and PostgreSQL have fairly heavy backend processes, and\nrunning hundreds of them on either database is a mistake. Sure,\nOracle can handle more transactions and scales a bit better, but no\none wants to have to buy a 128 way E15K to handle the load rather than\nimplementing connection pooling. Show me an Oracle server with 5000\nlive, active connections and I'll show you a VERY large and expensive\ncluster of machines.\n", "msg_date": "Fri, 29 May 2009 06:37:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/29 Scott Marlowe <[email protected]>:\n\n>\n> Both Oracle and PostgreSQL have fairly heavy backend processes, and\n> running hundreds of them on either database is a mistake.    Sure,\n> Oracle can handle more transactions and scales a bit better, but no\n> one wants to have to buy a 128 way E15K to handle the load rather than\n> implementing connection pooling.  Show me an Oracle server with 5000\n> live, active connections and I'll show you a VERY large and expensive\n> cluster of machines.\n\nyes, because for that, oracle has nicer set of features that allows\nyou to create cluster on cheaper machines, instead of buying one ;)\n\nBut other thing, worth noticing from my own experience is that you\nhave to pay for Oracle so much, just to be able to enjoy it for a bit,\npeople tend to buy better servers.\nIt feels more pro if you have to pay for it. That's the observation\nfrom UK, at least.\n\n\n\n-- \nGJ\n", "msg_date": "Fri, 29 May 2009 13:41:49 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/29 Grzegorz Jaśkiewicz <[email protected]>:\n> 2009/5/29 Scott Marlowe <[email protected]>:\n>\n>>\n>> Both Oracle and PostgreSQL have fairly heavy backend processes, and\n>> running hundreds of them on either database is a mistake.    Sure,\n>> Oracle can handle more transactions and scales a bit better, but no\n>> one wants to have to buy a 128 way E15K to handle the load rather than\n>> implementing connection pooling.  
Show me an Oracle server with 5000\n>> live, active connections and I'll show you a VERY large and expensive\n>> cluster of machines.\n>\n> yes, because for that, oracle has nicer set of features that allows\n> you to create cluster on cheaper machines, instead of buying one ;)\n\nOTOH, I can buy a rather large Solaris box for the price of the\nlicenses on a RAC cluster. Then I can take what's leftover and go on\nvacation, then hire Tom Lane for a year to hack pgsql to run even\nfaster on my big sun server. And then buy everyone at my office an\nespresso machine. Meanwhile, a team of several Oracle DBAs will still\nbe trying to get RAC up and running reliably. But yes, they'll be\nusing a few $20k servers to do it.\n\n> But other thing, worth noticing from my own experience is that you\n> have to pay for Oracle so much, just to be able to enjoy it for a bit,\n> people tend to buy better servers.\n\nIn my experience that's not really true. Since Oracle charges per CPU\nit's not uncommon to see people running it on a machine with the\nabsolute minimum # of CPUs to do the job.\n\n> It feels more pro if you have to pay for it. That's the observation\n> from UK, at least.\n\nI don't think you can rightly speak for the whole of the UK on the\nsubject, anymore than I can speak for the whole of the US. :)\n\nI think that professional is as professional does. The real costs for\nOracle are the salaries you have go pay to the team of DBAs to make it\nwork and stay up. I've never dealt with a database that needs as much\nconstant hand holding as Oracle seems to. And, Oracle DBAs tend to do\none thing, and that one thing really well, since they're used to being\non a team. So, you've got one guy to do production patching and\nsupport, another guy to do query tuning, another guy to plan\ndeployments, and another guy to write plpgsql. Then one or two other\nfolks for your app / db interfacing.\n\nMuch like a Formula One car, Oracle is an impressive bit of\ntechnology. But it's damned expensive to buy and more expensive to\noperate. Oracle's not some magic pixie dust that fixes all your DB\nproblems, it's got its own set of issues that it brings to the table.\n", "msg_date": "Fri, 29 May 2009 06:57:41 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "damn I agree with you Scott. I wish I had enough cash here to employ\nTom and other pg magicians to improve performance for all of us ;)\n\nThing is tho, postgresql is mostly used by companies, that either\ndon't have that sort of cash, but still like to get the performance,\nor companies that have 'why pay if it is free' policy.\n\nnow, about UK, combined with Ireland still is bit smaller than US ;)\nI don't know how about US, but in UK most companies still believe that\nMSCE personnel, few windows servers, with mssql are the best thing you\ncan get... so.\n\nOracle is really used by very very small minority.\n", "msg_date": "Fri, 29 May 2009 14:07:45 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/29 Grzegorz Jaśkiewicz <[email protected]>:\n> damn I agree with you Scott. 
I wish I had enough cash here to employ\n> Tom and other pg magicians to improve performance for all of us ;)\n>\n> Thing is tho, postgresql is mostly used by companies, that either\n> don't have that sort of cash, but still like to get the performance,\n> or companies that have 'why pay if it is free' policy.\n\nIt really is two very different kinds of companies. I have a friend\nwho used to work somewhere that the boss was a total cheapskate, and\nfor that reason, and that reason alone, had chosen PostgreSQL.\nBecause of his overly cheap ways, the company just sucked the life out\nof its employees.\n\nOTOH, I now work for a company that uses PostgreSQL because it's the\nbest fit solution that allows great performance on reasonable hardware\nfor little money. If Oracle provided a serious competitive advantage\nwe'd switch. But it really doesn't for us, and for 90% of the db folk\nout there it doesn't either.\n\n> now, about UK, combined with Ireland still is bit smaller than US ;)\n> I don't know how about US, but in UK most companies still believe that\n> MSCE personnel, few windows servers, with mssql are the best thing you\n> can get... so.\n\nThere's still plenty of that mentality here in the old USA as well.\nLast company I was at was taken over by managers with a MS and only MS\nmentality. They spent 4 years replacing a pair of Linux servers that\nprovided web services and LDAP with about 40 MS machines. CIO swore\nthere would be no more Linux in the company. Since then they've\nbought up a couple dozen smaller companies, most of which were running\non Linux, and they've had to hire back a lot of Linux talent to keep\nit running.\n\n> Oracle is really used by very very small minority.\n\nAnd sadly, an awful lot of those installations are Oracle \"by default\"\nnot because it's the best choice.\n", "msg_date": "Fri, 29 May 2009 07:21:00 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, 29 May 2009, Grzegorz Ja?kiewicz wrote:\n\n> if it is implemented somewhere else better, shouldn't that make it\n> obvious that postgresql should solve it internally ?\n\nOpening a database connection has some overhead to it that can't go away \nwithout losing *something* in the process that you want the database to \nhandle. That something usually impacts either security or crash-safety. \nThis is why every serious database product in the world suggests using \nconnection pooling; examples:\n\nhttp://blogs.oracle.com/opal/2006/10/oracle_announces_new_connectio.html\nhttp://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/conn/c0006170.htm\nhttp://msdn.microsoft.com/en-us/library/8xx3tyca.aspx\nhttp://dev.mysql.com/tech-resources/articles/connection_pooling_with_connectorj.html\n\nThe only difference here is that some of the commercial products bundle \nthe connection pooler into the main program. In most cases, you're still \nstuck with configuring a second piece of software, the only difference is \nthat said software might already be installed for you by the big DB \ninstaller. Since this project isn't in the business of bundling every \npiece of additional software that might be useful with the database, it's \nnever going to make it into the core code when it works quite happily \noutside of it. 
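From the application's side an external pooler is just another PostgreSQL endpoint, so the switch is usually nothing more than a host/port change in the connection settings; a hypothetical example (host, database and user names are placeholders):

    # direct to PostgreSQL
    psql -h db1.example.com -p 5432 -U app appdb
    # same application, now pointed at a pgBouncer listening on 6432
    psql -h db1.example.com -p 6432 -U app appdb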
The best you could hope for is that people who bundle \nlarge chunks of other stuff along with their PostgreSQL installer, like \nEnterprise DB does, might include one of the popular poolers one day.\n\nAnd that's how we got to here. There are plenty of PostgreSQL problems \none might run into that there are no usable solutions to, but that other \ndatabase vendors have already solved nicely. From a pragmatic standpoint, \nI'd rather see people work on those, rather than try and forge new ground \non a problem everyone else in the industry has failed to solve.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 29 May 2009 13:37:54 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/28 Greg Smith <[email protected]>\n\n> On Thu, 28 May 2009, Flavio Henrique Araque Gurgel wrote:\n>\n> It is 2.6.24 We had to apply the kswapd patch also. It's important\n>> specially if you see your system % going as high as 99% in top and loosing\n>> the machine's control. I have read something about 2.6.28 had this patch\n>> accepted in mainstream.\n>>\n>\n> It would help if you gave more specific information about what you're\n> talking about. I know there was a bunch of back and forth on the \"kswapd\n> should only wait on IO if there is IO\" patch, where it was commited and then\n> reverted etc, but it's not clear to me if that's what you're talking\n> about--and if so, what that has to do with the context switch problem.\n>\n> Back to Fabrix's problem. You're fighting a couple of losing battles here.\n> Let's go over the initial list:\n>\n> 1) You have 32 cores. You think they should be allowed to schedule\n>\n>> 3500 active connections across them. That doesn't work, and what happens\n>>\n> is exactly the sort of context switch storm you're showing data for. Think\n> about it for a minute: how many of those can really be doing work at any\n> time? 32, that's how many. Now, you need some multiple of the number of\n> cores to try to make sure everybody is always busy, but that multiple should\n> be closer to 10X the number of cores rather than 100X. You need to adjust\n> the connection pool ratio so that the PostgreSQL max_connections is closer\n> to 500 than 5000, and this is by far the most critical thing for you to do.\n> The PostgreSQL connection handler is known to be bad at handling high\n> connection loads compared to the popular pooling projects, so you really\n> shouldn't throw this problem at it. While kernel problems stack on top of\n> that, you really shouldn't start at kernel fixes; nail the really\n> fundamental and obvious problem first.\n\n\nIn this application is not closing the connection, the development team is\nmakeing the change for close the connection after getting the job done. So\nmost connections are in idle state. How much would this help? Does this\ncould be the real problem?\n\n\n>\n> 2) You have very new hardware and a very old kernel. Once you've done the\n> above, if you're still not happy with performance, at that point you should\n> consider using a newer one. It's fairly simple to build a Linux kernel\n> using the same basic kernel parameters as the stock RedHat one. 2.6.28 is\n> six months old now, is up to 2.6.28.10, and has gotten a lot more testing\n> than most kernels due to it being the Ubuntu 9.04 default. 
I'd suggest you\n> try out that version.\n\n\nok, I'll test if updating the kernel this improves\n\n\n>\n> 3) A system with 128GB of RAM is in a funny place where by using the\n> defaults or the usual rules of thumb for a lot of parameters (\"set\n> shared_buffers to 1/4 of RAM\") are all bad ideas. shared_buffers seems to\n> top out its usefulness around 10GB on current generation hardware/software,\n> and some Linux memory tunables have defaults on 2.6.18 that are insane for\n> your system; vm_dirty_ratio at 40 comes to mind as the one I run into most.\n> Some of that gets fixed just by moving to a newer kernel, some doesn't.\n> Again, these aren't the problems you're having now though; they're the ones\n> you'll have in the future *if* you fix the more fundamental problems first.\n\n\n\n\n>\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n\n2009/5/28 Greg Smith <[email protected]>\nOn Thu, 28 May 2009, Flavio Henrique Araque Gurgel wrote:\n\n\nIt is 2.6.24 We had to apply the kswapd patch also. It's important specially if you see your system % going as high as 99% in top and loosing the machine's control. I have read something about 2.6.28 had this patch accepted in mainstream.\n\n\nIt would help if you gave more specific information about what you're talking about.  I know there was a bunch of back and forth on the \"kswapd should only wait on IO if there is IO\" patch, where it was commited and then reverted etc, but it's not clear to me if that's what you're talking about--and if so, what that has to do with the context switch problem.\n\nBack to Fabrix's problem.  You're fighting a couple of losing battles here.  Let's go over the initial list:\n\n1) You have 32 cores.  You think they should be allowed to schedule\n\n3500 active connections across them.  That doesn't work, and what happens\n\nis exactly the sort of context switch storm you're showing data for. Think about it for a minute:  how many of those can really be doing work at any time?  32, that's how many.  Now, you need some multiple of the number of cores to try to make sure everybody is always busy, but that multiple should be closer to 10X the number of cores rather than 100X. You need to adjust the connection pool ratio so that the PostgreSQL max_connections is closer to 500 than 5000, and this is by far the most critical thing for you to do.  The PostgreSQL connection handler is known to be bad at handling high connection loads compared to the popular pooling projects, so you really shouldn't throw this problem at it. While kernel problems stack on top of that, you really shouldn't start at kernel fixes; nail the really fundamental and obvious problem first.\nIn this application is not closing the connection, the development team is makeing the change for close the connection after getting the job done. So most connections are in idle state.  How much would this help? Does this could be the real problem?\n\n\n2) You have very new hardware and a very old kernel.  Once you've done the above, if you're still not happy with performance, at that point you should consider using a newer one.  It's fairly simple to build a Linux kernel using the same basic kernel parameters as the stock RedHat one. 2.6.28 is six months old now, is up to 2.6.28.10, and has gotten a lot more testing than most kernels due to it being the Ubuntu 9.04 default. 
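On the vm_dirty_ratio point quoted above, those tunables are plain sysctls; a commonly suggested starting point for a machine with this much RAM (a rule of thumb, not something tested on this particular server) is:

    # /etc/sysctl.conf -- then apply with: sysctl -p
    vm.dirty_ratio = 10
    vm.dirty_background_ratio = 5

so background writeback starts early instead of letting tens of gigabytes of dirty pages accumulate and then flushing them in one burst.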
I'd suggest you try out that version.\nok, I'll test if updating the kernel this improves\n\n3) A system with 128GB of RAM is in a funny place where by using the defaults or the usual rules of thumb for a lot of parameters (\"set shared_buffers to 1/4 of RAM\") are all bad ideas.  shared_buffers seems to top out its usefulness around 10GB on current generation hardware/software, and some Linux memory tunables have defaults on 2.6.18 that are insane for your system; vm_dirty_ratio at 40 comes to mind as the one I run into most.  Some of that gets fixed just by moving to a newer kernel, some doesn't.  Again, these aren't the problems you're having now though; they're the ones you'll have in the future *if* you fix the more fundamental problems first.\n \n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD", "msg_date": "Fri, 29 May 2009 11:49:00 -0600", "msg_from": "Fabrix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, 29 May 2009, Fabrix wrote:\n\n> In this application is not closing the connection, the development team \n> is makeing the change for close the connection after getting the job \n> done. So most connections are in idle state.  How much would this help? \n> Does this could be the real problem?\n\nAh, now you're getting somewhere.  This is actually another subtle problem \nwith making max_connections really high.  It lets developers get away with \nbeing sloppy in ways that waste large amount of server resources.\n\nFix that problem, re-test, and then think about changing other things. \nThere's no need to go through kernel changes and the lot if you can nail \nthe problem at its true source and get acceptable performance.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\nFrom: Scott Mead <[email protected]>\nTo: Greg Smith <[email protected]>\nDate: Fri, 29 May 2009 14:20:28 -0400\nSubject: Re: Scalability in postgres\n\n2009/5/29 Greg Smith <[email protected]>\n\n> On Fri, 29 May 2009, Grzegorz Ja?kiewicz wrote:\n>\n> if it is implemented somewhere else better, shouldn't that make it\n>> obvious that postgresql should solve it internally ?\n>>\n>\n> Opening a database connection has some overhead to it that can't go away\n> without losing *something* in the process that you want the database to\n> handle. That something usually impacts either security or crash-safety.\n> This is why every serious database product in the world suggests using\n> connection pooling; examples:\n>\n> http://blogs.oracle.com/opal/2006/10/oracle_announces_new_connectio.html\n>\n> http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/conn/c0006170.htm\n> http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx\n>\n> http://dev.mysql.com/tech-resources/articles/connection_pooling_with_connectorj.html\n>\n\n\n\n Exactly, here's the thing, if you have an open transaction somewhere to\nthe system, there may be a REALLY good reason for it. If you're app or dev\nteam is keeping those open, it's very possible that 'reaping' them is going\nto cause some kind of data integrity issue in your database. I would\ninvestigate the application and make sure that everything is actually\nrolling back or commiting. If you're using an ORM, make sure that it's\nusing autocommit, this usually makes the issue go away.\n As to the context switching point -- A connection pooler is what you need.\n Why make your database server dedicate cycles to having to figure out who\ngets on the CPU next? Why not lower the number of connections, and let a\nconnection pool decide what to use. That usually helps with your open\ntransactions too (if they indeed are just abandoned by the application).\n\n\n\n>\n> The only difference here is that some of the commercial products bundle the\n> connection pooler into the main program. 
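On the open/idle connection point above, a quick way to see what those thousands of sessions are actually doing on an 8.2/8.3-era server is a count by query state from pg_stat_activity; this is only an illustrative query, not one from the original thread:

    SELECT current_query, count(*)
      FROM pg_stat_activity
     GROUP BY current_query
     ORDER BY count(*) DESC;

Sessions that were simply never closed show up as '<IDLE>', and abandoned transactions as '<IDLE> in transaction' (newer releases expose the same information in a separate state column).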
In most cases, you're still stuck\n> with configuring a second piece of software, the only difference is that\n> said software might already be installed for you by the big DB installer.\n> Since this project isn't in the business of bundling every piece of\n> additional software that might be useful with the database, it's never going\n> to make it into the core code when it works quite happily outside of it.\n> The best you could hope for is that people who bundle large chunks of other\n> stuff along with their PostgreSQL installer, like Enterprise DB does, might\n> include one of the popular poolers one day.\n\n\n This sounds like a dirty plug (sorry sorry sorry, it's for informative\npurposes only)...\n\nOpen Source:\n\n One-Click installers : No connection pool bundled (will be\nincluded in 8.4 one-click installers)\n PostgresPlus Standard Edition : pgBouncer is bundled\n\nProprietary:\n\n PostgresPlus Advanced Server: pgBouncer is bundled\n\n That being said, the well known connection pools for postgres are pretty\nsmall and easy to download / build / configure and get up and running.\n\nhttps://developer.skype.com/SkypeGarage/DbProjects/PgBouncer\nhttp://pgfoundry.org/projects/pgpool/\n\n--Scott\n", "msg_date": "Fri, 29 May 2009 14:17:48 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, May 29, 2009 at 12:20 PM, Scott Mead\n<[email protected]> 
wrote:\n>  This sounds like a dirty plug (sorry sorry sorry, it's for informative\n> purposes only)...\n\n(Commercial applications mentioned deleted for brevity.)\n\nJust sounded like useful information to me. I'm not anti commercial,\njust anti-marketing speak. :) Like a lot of folks on this list\nreally.\n", "msg_date": "Fri, 29 May 2009 12:34:51 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/29 Scott Mead <[email protected]>\n\n> 2009/5/29 Greg Smith <[email protected]>\n>\n>> On Fri, 29 May 2009, Grzegorz Ja?kiewicz wrote:\n>>\n>> if it is implemented somewhere else better, shouldn't that make it\n>>> obvious that postgresql should solve it internally ?\n>>>\n>>\n>> Opening a database connection has some overhead to it that can't go away\n>> without losing *something* in the process that you want the database to\n>> handle. That something usually impacts either security or crash-safety.\n>> This is why every serious database product in the world suggests using\n>> connection pooling; examples:\n>>\n>> http://blogs.oracle.com/opal/2006/10/oracle_announces_new_connectio.html\n>>\n>> http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/conn/c0006170.htm\n>> http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx\n>>\n>> http://dev.mysql.com/tech-resources/articles/connection_pooling_with_connectorj.html\n>>\n>\n>\n>\n> Exactly, here's the thing, if you have an open transaction somewhere to\n> the system, there may be a REALLY good reason for it. If you're app or dev\n> team is keeping those open, it's very possible that 'reaping' them is going\n> to cause some kind of data integrity issue in your database. I would\n> investigate the application and make sure that everything is actually\n> rolling back or commiting. If you're using an ORM, make sure that it's\n> using autocommit, this usually makes the issue go away.\n> As to the context switching point -- A connection pooler is what you need.\n> Why make your database server dedicate cycles to having to figure out who\n> gets on the CPU next? Why not lower the number of connections, and let a\n> connection pool decide what to use. That usually helps with your open\n> transactions too (if they indeed are just abandoned by the application).\n>\n>\n>\n>>\n>> The only difference here is that some of the commercial products bundle\n>> the connection pooler into the main program. In most cases, you're still\n>> stuck with configuring a second piece of software, the only difference is\n>> that said software might already be installed for you by the big DB\n>> installer. Since this project isn't in the business of bundling every piece\n>> of additional software that might be useful with the database, it's never\n>> going to make it into the core code when it works quite happily outside of\n>> it. 
The best you could hope for is that people who bundle large chunks of\n>> other stuff along with their PostgreSQL installer, like Enterprise DB does,\n>> might include one of the popular poolers one day.\n>>\n>\n> This sounds like a dirty plug (sorry sorry sorry, it's for informative\n> purposes only)...\n>\n> Open Source:\n>\n> One-Click installers : No connection pool bundled (will be\n> included in 8.4 one-click installers)\n> PostgresPlus Standard Edition : pgBouncer is bundled\n>\n> Proprietary:\n>\n> PostgresPlus Advanced Server: pgBouncer is bundled\n>\n> That being said, the well known connection pools for postgres are pretty\n> small and easy to download / build / configure and get up and running.\n>\n> https://developer.skype.com/SkypeGarage/DbProjects/PgBouncer\n> http://pgfoundry.org/projects/pgpool/\n>\n\nWhich is better and more complete, which have more features?\nWhat you recommend? pgbouncer or pgpool?\n\n>\n> --Scott\n>\n>\n\n2009/5/29 Scott Mead <[email protected]>\n2009/5/29 Greg Smith <[email protected]>\nOn Fri, 29 May 2009, Grzegorz Ja?kiewicz wrote:\n\n\nif it is implemented somewhere else better, shouldn't that make it\nobvious that postgresql should solve it internally ?\n\n\nOpening a database connection has some overhead to it that can't go away without losing *something* in the process that you want the database to handle.  That something usually impacts either security or crash-safety. This is why every serious database product in the world suggests using connection pooling; examples:\n\nhttp://blogs.oracle.com/opal/2006/10/oracle_announces_new_connectio.html\nhttp://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/conn/c0006170.htm\nhttp://msdn.microsoft.com/en-us/library/8xx3tyca.aspx\nhttp://dev.mysql.com/tech-resources/articles/connection_pooling_with_connectorj.html\n  Exactly, here's the thing, if you have an open transaction somewhere to the system, there may be a REALLY good reason for it.  If you're app or dev team is keeping those open, it's very possible that 'reaping' them is going to cause some kind of data integrity issue in your database.  I would investigate the application and make sure that everything is actually rolling back or commiting.  If you're using an ORM, make sure that it's using autocommit, this usually makes the issue go away.\n As to the context switching point -- A connection pooler is what you need.  Why make your database server dedicate cycles to having to figure out who gets on the CPU next?  Why not lower the number of connections, and let a connection pool decide what to use.  That usually helps with your open transactions too (if they indeed are just abandoned by the application).  \n \n\nThe only difference here is that some of the commercial products bundle the connection pooler into the main program.  In most cases, you're still stuck with configuring a second piece of software, the only difference is that said software might already be installed for you by the big DB installer. Since this project isn't in the business of bundling every piece of additional software that might be useful with the database, it's never going to make it into the core code when it works quite happily outside of it.  The best you could hope for is that people who bundle large chunks of other stuff along with their PostgreSQL installer, like Enterprise DB does, might include one of the popular poolers one day.\n\n This sounds like a dirty plug (sorry sorry sorry, it's for informative purposes only)... 
\nOpen Source:        One-Click installers :    No connection pool bundled  (will be included in 8.4 one-click installers)      PostgresPlus Standard Edition :  pgBouncer is bundled\nProprietary:      PostgresPlus Advanced Server: pgBouncer is bundled  That being said, the well known connection pools for postgres are pretty small and easy to download / build / configure and get up and running.\nhttps://developer.skype.com/SkypeGarage/DbProjects/PgBouncer\nhttp://pgfoundry.org/projects/pgpool/ \nWhich is better and more complete, which have more features? What you recommend? pgbouncer or pgpool? \n\n--Scott", "msg_date": "Fri, 29 May 2009 13:45:48 -0600", "msg_from": "Fabrix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, May 29, 2009 at 3:45 PM, Fabrix <[email protected]> wrote:\n\n>\n> Which is better and more complete, which have more features?\n> What you recommend? pgbouncer or pgpool?\n>\n>>\n In your case, where you're looking to just get the connection overhead\noff of the machine, pgBouncer is probably going to be more efficient. It's\nsmall and very lightweight, and you don't have to worry about a lot of extra\nfeatures. It is a '... to the wall' connection pool.\n\n pgPool is definitely more feature-full, but honestly, you probably don't\nneed the ability (at least now) to balance selects / against replicated\nservers, or have the pooler do a write to multiple servers for H/A. Both\nthese things would take more time to implement.\n\npgPool is real an all-around H/A / scalability architecture e decision\nwhereas pgBouncer is a nice, lightweight and quick way to:\n\n *) Lower the number of connections to the dbserver\n *) Avoid connect / disconnect overhead\n\n--Scott\n\nOn Fri, May 29, 2009 at 3:45 PM, Fabrix <[email protected]> wrote:\nWhich is better and more complete, which have more features? What you recommend? pgbouncer or pgpool? \n\n   In your case, where you're looking to just get the connection overhead off of the machine, pgBouncer is probably going to be more efficient.  It's small and very lightweight, and you don't have to worry about a lot of extra features.  It is a '... to the wall' connection pool.\n   pgPool is definitely more feature-full, but honestly, you probably don't need the ability (at least now) to balance selects / against replicated servers, or have the pooler do a write to multiple servers for H/A.  Both these things would take more time to implement.  \npgPool is real an all-around H/A / scalability architecture e decision whereas pgBouncer is a nice, lightweight and quick way to:   *) Lower the number of connections to the dbserver\n   *) Avoid connect / disconnect overhead--Scott", "msg_date": "Fri, 29 May 2009 15:50:48 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "\nOn 5/28/09 6:54 PM, \"Greg Smith\" <[email protected]> wrote:\n\n> 2) You have very new hardware and a very old kernel. Once you've done the\n> above, if you're still not happy with performance, at that point you\n> should consider using a newer one. 
It's fairly simple to build a Linux\n> kernel using the same basic kernel parameters as the stock RedHat one.\n> 2.6.28 is six months old now, is up to 2.6.28.10, and has gotten a lot\n> more testing than most kernels due to it being the Ubuntu 9.04 default.\n> I'd suggest you try out that version.\n\n\nComparing RedHat's 2.6.18, heavily patched, fix backported kernel to the\noriginal 2.6.18 is really hard. Yes, much of it is old, but a lot of stuff\nhas been backported.\nI have no idea if things related to this case have been backported. Virtual\nmemory management is complex and only bug fixes would likely go in however.\nBut RedHat 5.3 for example put all the new features for Intel's latest\nprocessor in the release (which may not even be in 2.6.28!).\n\nThere are operations/IT people won't touch Ubuntu etc with a ten foot pole\nyet for production. That may be irrational, but such paranoia exists. The\nlatest postgres release is generally a hell of a lot safer than the latest\nlinux kernel, and people get paranoid about their DB.\n\nIf you told someone who has to wake up at 3AM by page if the system has an\nerror that \"oh, we patched our own kenrel build into the RedHat OS\" they\nmight not be ok with that.\n\nIts a good test to see if this problem is fixed in the kernel. I've seen\nCentOS 5.2 go completely nuts with system CPU time and context switches with\nkswapd many times before. I haven't put the system under the same stress\nwith 5.3 yet however.\n\n", "msg_date": "Fri, 29 May 2009 17:48:45 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, 29 May 2009, Scott Carey wrote:\n\n> There are operations/IT people won't touch Ubuntu etc with a ten foot pole\n> yet for production.\n\nThe only thing I was suggesting is that because 2.6.28 is the latest \nUbuntu kernel, that means it's gotten a lot more exposure and testing \nthan, say, other options like 2.6.27 or 2.6.29.\n\nI build a fair number of RedHat/CentOS systems with an upgraded kernel \nbased on mature releases from kernel.org, and a config as close as \npossible to the original RedHat one, with the generic kernel defaults for \nall the new settings. I keep liking that combination better than just \nusing an Ubuntu version with a newer kernel. I've seen a couple of odd \nkernel setting choices in Ubuntu releases before that motivate that \nchoice; the scheduler trainwreck described at \nhttps://bugs.launchpad.net/ubuntu/+source/linux/+bug/188226 comes to mind.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 30 May 2009 23:41:53 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Sat, May 30, 2009 at 9:41 PM, Greg Smith <[email protected]> wrote:\n> On Fri, 29 May 2009, Scott Carey wrote:\n>\n>> There are operations/IT people won't touch Ubuntu etc with a ten foot pole\n>> yet for production.\n>\n> The only thing I was suggesting is that because 2.6.28 is the latest Ubuntu\n> kernel, that means it's gotten a lot more exposure and testing than, say,\n> other options like 2.6.27 or 2.6.29.\n>\n> I build a fair number of RedHat/CentOS systems with an upgraded kernel based\n> on mature releases from kernel.org, and a config as close as possible to the\n> original RedHat one, with the generic kernel defaults for all the new\n> settings.  
I keep liking that combination better than just using an Ubuntu\n> version with a newer kernel.  I've seen a couple of odd kernel setting\n> choices in Ubuntu releases before that motivate that choice; the scheduler\n> trainwreck described at\n> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/188226 comes to mind.\n\n8.04 was a frakking train wreck in many ways. It wasn't until 8.04.2\ncame out that it was even close to useable as a server OS, and even\nthen, not for databases yet. It's still got broken bits and pieces\nmarked \"fixed in 8.10\"... Uh, hello, it's your LTS release, fixes\nshould be made there as a priority. There's a reason my dbs run on\nCentos / RHEL. It's not the fastest release ever, but it doesn't go\ndown on me and it just works.\n", "msg_date": "Sat, 30 May 2009 23:20:42 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "2009/5/29 Scott Carey <[email protected]>\n\n>\n> On 5/28/09 6:54 PM, \"Greg Smith\" <[email protected]> wrote:\n>\n> > 2) You have very new hardware and a very old kernel. Once you've done\n> the\n> > above, if you're still not happy with performance, at that point you\n> > should consider using a newer one. It's fairly simple to build a Linux\n> > kernel using the same basic kernel parameters as the stock RedHat one.\n> > 2.6.28 is six months old now, is up to 2.6.28.10, and has gotten a lot\n> > more testing than most kernels due to it being the Ubuntu 9.04 default.\n> > I'd suggest you try out that version.\n>\n>\n> Comparing RedHat's 2.6.18, heavily patched, fix backported kernel to the\n> original 2.6.18 is really hard. Yes, much of it is old, but a lot of stuff\n> has been backported.\n> I have no idea if things related to this case have been backported.\n> Virtual\n> memory management is complex and only bug fixes would likely go in however.\n> But RedHat 5.3 for example put all the new features for Intel's latest\n> processor in the release (which may not even be in 2.6.28!).\n>\n> There are operations/IT people won't touch Ubuntu etc with a ten foot pole\n> yet for production. That may be irrational, but such paranoia exists. The\n> latest postgres release is generally a hell of a lot safer than the latest\n> linux kernel, and people get paranoid about their DB.\n>\n> If you told someone who has to wake up at 3AM by page if the system has an\n> error that \"oh, we patched our own kenrel build into the RedHat OS\" they\n> might not be ok with that.\n>\n> Its a good test to see if this problem is fixed in the kernel. I've seen\n> CentOS 5.2 go completely nuts with system CPU time and context switches\n> with\n> kswapd many times before. I haven't put the system under the same stress\n> with 5.3 yet however.\n>\n\nOne of the server is: Intel Xeon X7350 2.93GHz, RH 5.3 and kernel\n2.6.18-128.el5.\nand the perfonmace is bad too, so i don't think the probles is the kernel\n\nThe two servers that I tested (HP-785 Opteron and IBM x3950 M2 Xeon) have\nNUMA architecture. 
and I thought the problem was caused by NUMA.\n\nhttp://archives.postgresql.org/pgsql-admin/2008-11/msg00157.php\n\nI'm trying another server, an HP blade bl 680 with Xeon E7450 (4 CPU x 6\ncores= 24 cores) without NUMA architecture, but the CPUs are also going up.\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 1 0 0 46949972 116908 17032964 0 0 15 31 2 2 1 0\n98 0 0\n 2 0 0 46945880 116916 17033068 0 0 72 140 2059 3140 1 1\n97 0 0\n329 0 0 46953260 116932 17033208 0 0 24 612 1435 194237 44\n3 53 0 0\n546 0 0 46952912 116940 17033208 0 0 4 136 1090 327047 96\n4 0 0 0\n562 0 0 46951052 116940 17033224 0 0 0 0 1095 323034 95\n4 0 0 0\n514 0 0 46949200 116952 17033212 0 0 0 224 1088 330178 96\n3 1 0 0\n234 0 0 46948456 116952 17033212 0 0 0 0 1106 315359 91\n5 4 0 0\n 4 0 0 46958376 116968 17033272 0 0 16 396 1379 223499 47\n3 49 0 0\n 1 1 0 46941644 116976 17033224 0 0 152 1140 2662 5540 4 2\n93 1 0\n 1 0 0 46943196 116984 17033248 0 0 104 604 2307 3992 4 2\n94 0 0\n 1 1 0 46931544 116996 17033568 0 0 104 4304 2318 3585 1 1\n97 1 0\n 0 0 0 46943572 117004 17033568 0 0 32 204 2007 2986 1 1\n98 0 0\n\n\n Now i don't think the probles is NUMA.\n\n\nThe developer team will fix de aplication and then i will test again.\n\nI believe that when the application closes the connection the problem could\nbe solved, and then 16 cores in a server does the work instead of a 32 or\n24.\n\n\nRegards...\n\n--Fabrix\n\n2009/5/29 Scott Carey <[email protected]>\n\nOn 5/28/09 6:54 PM, \"Greg Smith\" <[email protected]> wrote:\n\n> 2) You have very new hardware and a very old kernel.  Once you've done the\n> above, if you're still not happy with performance, at that point you\n> should consider using a newer one.  It's fairly simple to build a Linux\n> kernel using the same basic kernel parameters as the stock RedHat one.\n> 2.6.28 is six months old now, is up to 2.6.28.10, and has gotten a lot\n> more testing than most kernels due to it being the Ubuntu 9.04 default.\n> I'd suggest you try out that version.\n\n\nComparing RedHat's 2.6.18, heavily patched, fix backported kernel to the\noriginal 2.6.18 is really hard.  Yes, much of it is old, but a lot of stuff\nhas been backported.\nI have no idea if things related to this case have been backported.  Virtual\nmemory management is complex and only bug fixes would likely go in however.\nBut RedHat 5.3 for example put all the new features for Intel's latest\nprocessor in the release (which may not even be in 2.6.28!).\n\nThere are operations/IT people won't touch Ubuntu etc with a ten foot pole\nyet for production.  That may be irrational, but such paranoia exists.  The\nlatest postgres release is generally a hell of a lot safer than the latest\nlinux kernel, and people get paranoid about their DB.\n\nIf you told someone who has to wake up at 3AM by page if the system has an\nerror that \"oh, we patched our own kenrel build into the RedHat OS\" they\nmight not be ok with that.\n\nIts a good test to see if this problem is fixed in the kernel. I've seen\nCentOS 5.2 go completely nuts with system CPU time and context switches with\nkswapd many times before.  
I haven't put the system under the same stress\nwith 5.3 yet however.\nOne of the server is: Intel Xeon X7350 2.93GHz, RH 5.3 and kernel 2.6.18-128.el5.and the perfonmace is bad too, so i don't  think the probles is the kernel The two servers that I tested (HP-785 Opteron and IBM x3950 M2 Xeon) have NUMA architecture. and I thought the problem was caused by NUMA.\nhttp://archives.postgresql.org/pgsql-admin/2008-11/msg00157.phpI'm trying another server, an HP blade bl 680 with Xeon E7450 (4 CPU x 6 cores= 24 cores) without NUMA architecture, but the CPUs are also going up. \nprocs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st 1  0      0 46949972 116908 17032964    0    0    15    31    2    2  1  0 98  0  0\n 2  0      0 46945880 116916 17033068    0    0    72   140 2059 3140  1  1 97  0  0329  0      0 46953260 116932 17033208    0    0    24   612 1435 194237 44  3 53  0  0546  0      0 46952912 116940 17033208    0    0     4   136 1090 327047 96  4  0  0  0\n562  0      0 46951052 116940 17033224    0    0     0     0 1095 323034 95  4  0  0  0514  0      0 46949200 116952 17033212    0    0     0   224 1088 330178 96  3  1  0  0234  0      0 46948456 116952 17033212    0    0     0     0 1106 315359 91  5  4  0  0\n 4  0      0 46958376 116968 17033272    0    0    16   396 1379 223499 47  3 49  0  0 1  1      0 46941644 116976 17033224    0    0   152  1140 2662 5540  4  2 93  1  0 1  0      0 46943196 116984 17033248    0    0   104   604 2307 3992  4  2 94  0  0\n 1  1      0 46931544 116996 17033568    0    0   104  4304 2318 3585  1  1 97  1  0 0  0      0 46943572 117004 17033568    0    0    32   204 2007 2986  1  1 98  0  0 Now i don't  think the probles is NUMA. \nThe developer team will fix de aplication  and then i will test again.I believe that when the application closes the connection the problem\ncould be solved, and then 16 cores in a server does the work instead of\na 32 or 24.Regards...--Fabrix", "msg_date": "Sun, 31 May 2009 10:37:33 -0600", "msg_from": "Fabrix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "\nOn 5/31/09 9:37 AM, \"Fabrix\" <[email protected]> wrote:\n\n> \n> \n> 2009/5/29 Scott Carey <[email protected]>\n>> \n>> On 5/28/09 6:54 PM, \"Greg Smith\" <[email protected]> wrote:\n>> \n>>> 2) You have very new hardware and a very old kernel.  Once you've done the\n>>> above, if you're still not happy with performance, at that point you\n>>> should consider using a newer one.  It's fairly simple to build a Linux\n>>> kernel using the same basic kernel parameters as the stock RedHat one.\n>>> 2.6.28 is six months old now, is up to 2.6.28.10, and has gotten a lot\n>>> more testing than most kernels due to it being the Ubuntu 9.04 default.\n>>> I'd suggest you try out that version.\n>> \n>> \n>> Comparing RedHat's 2.6.18, heavily patched, fix backported kernel to the\n>> original 2.6.18 is really hard.  Yes, much of it is old, but a lot of stuff\n>> has been backported.\n>> I have no idea if things related to this case have been backported.  Virtual\n>> memory management is complex and only bug fixes would likely go in however.\n>> But RedHat 5.3 for example put all the new features for Intel's latest\n>> processor in the release (which may not even be in 2.6.28!).\n>> \n>> There are operations/IT people won't touch Ubuntu etc with a ten foot pole\n>> yet for production.  
That may be irrational, but such paranoia exists.  The\n>> latest postgres release is generally a hell of a lot safer than the latest\n>> linux kernel, and people get paranoid about their DB.\n>> \n>> If you told someone who has to wake up at 3AM by page if the system has an\n>> error that \"oh, we patched our own kenrel build into the RedHat OS\" they\n>> might not be ok with that.\n>> \n>> Its a good test to see if this problem is fixed in the kernel. I've seen\n>> CentOS 5.2 go completely nuts with system CPU time and context switches with\n>> kswapd many times before.  I haven't put the system under the same stress\n>> with 5.3 yet however.\n> \n> One of the server is: Intel Xeon X7350 2.93GHz, RH 5.3 and kernel\n> 2.6.18-128.el5.\n> and the perfonmace is bad too, so i don't  think the probles is the kernel\n> \n> The two servers that I tested (HP-785 Opteron and IBM x3950 M2 Xeon) have NUMA\n> architecture. and I thought the problem was caused by NUMA.\n> \n> http://archives.postgresql.org/pgsql-admin/2008-11/msg00157.php\n> \n> I'm trying another server, an HP blade bl 680 with Xeon E7450 (4 CPU x 6\n> cores= 24 cores) without NUMA architecture, but the CPUs are also going up.\n> \n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa\n> st\n>  1  0      0 46949972 116908 17032964    0    0    15    31    2    2  1  0\n> 98  0  0\n>  2  0      0 46945880 116916 17033068    0    0    72   140 2059 3140  1  1\n> 97  0  0\n> 329  0      0 46953260 116932 17033208    0    0    24   612 1435 194237 44  3\n> 53  0  0\n> 546  0      0 46952912 116940 17033208    0    0     4   136 1090 327047 96 \n> 4  0  0  0\n> 562  0      0 46951052 116940 17033224    0    0     0     0 1095 323034 95 \n> 4  0  0  0\n> 514  0      0 46949200 116952 17033212    0    0     0   224 1088 330178 96 \n> 3  1  0  0\n> 234  0      0 46948456 116952 17033212    0    0     0     0 1106 315359 91 \n> 5  4  0  0\n>  4  0      0 46958376 116968 17033272    0    0    16   396 1379 223499 47  3\n> 49  0  0\n>  1  1      0 46941644 116976 17033224    0    0   152  1140 2662 5540  4  2\n> 93  1  0\n>  1  0      0 46943196 116984 17033248    0    0   104   604 2307 3992  4  2\n> 94  0  0\n>  1  1      0 46931544 116996 17033568    0    0   104  4304 2318 3585  1  1\n> 97  1  0\n>  0  0      0 46943572 117004 17033568    0    0    32   204 2007 2986  1  1\n> 98  0  0\n> \n> \n>  Now i don't  think the probles is NUMA.\n> \n> \n> The developer team will fix de aplication  and then i will test again.\n> \n> I believe that when the application closes the connection the problem could be\n> solved, and then 16 cores in a server does the work instead of a 32 or 24.\n\nHidden in the above data is that the context switch craziness is not\ncorrelated with system CPU time, but user CPU time -- so this is not likely\nrelated to the kswapd context switch stuff which is associated with high\nsystem CPU use. 
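(A quick way to confirm that while one of those spikes is happening is to sample the catalogs -- rough sketch only, column names are from the 8.2/8.3-era views so adjust for your version:

  -- how many backends are blocked waiting on a lock right now?
  select count(*) from pg_stat_activity where waiting;

  -- and what kinds of locks are not being granted?
  select locktype, mode, count(*)
    from pg_locks
   where not granted
   group by locktype, mode
   order by count(*) desc;

If the waiter count tracks the context switch spikes, that points at lock contention inside the database rather than at the VM or the scheduler.)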
\n\nIts probably locking in Postgres.\n\n\n> \n> \n> Regards...\n> \n> --Fabrix\n> \n> \n> \n\n", "msg_date": "Mon, 1 Jun 2009 12:19:44 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Grzegorz Jaśkiewicz wrote:\n> \n> I thought that's where the difference is between postgresql and oracle\n> mostly, ability to handle more transactions and better scalability .\n> \n\nWhich were you suggesting had this \"better scalability\"?\n\nI recall someone summarizing to a CFO where I used to work:\n\"Oracle may scale technologically, but it doesn't scale well financially.\"\n\n\n", "msg_date": "Mon, 01 Jun 2009 17:42:19 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Sat, 30 May 2009, Scott Marlowe wrote:\n\n> 8.04 was a frakking train wreck in many ways. It wasn't until 8.04.2\n> came out that it was even close to useable as a server OS, and even\n> then, not for databases yet. It's still got broken bits and pieces\n> marked \"fixed in 8.10\"... Uh, hello, it's your LTS release, fixes\n> should be made there as a priority.\n\nUbuntu doesn't really have LTS releases, they just have ones they claim \nare supported longer than others. But as you've also noticed, they really \naren't. All they really offer is long-term critical security fixes for \nthose releases, that's it. The longest I've ever gotten an Ubuntu box to \nlast before becoming overwhelmed by bugs that were only fixed in later \nversions and not backported was around 2 years.\n\n...but now we're wondering way off topic.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 2 Jun 2009 00:23:04 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Greg Smith wrote:\n>> 3500 active connections across them. That doesn't work, and what \n>> happens\n> is exactly the sort of context switch storm you're showing data for. \n> Think about it for a minute: how many of those can really be doing \n> work at any time? 32, that's how many. Now, you need some multiple \n> of the number of cores to try to make sure everybody is always busy, \n> but that multiple should be closer to 10X the number of cores rather \n> than 100X. \nThat's surely overly simplistic. There is inherently nothing problematic\nabout having a lot of compute processes waiting for their timeslice, nor\nof having IO- or semaphore-blocked processes waiting, and it doesn't\ncause a context switch storm - this is a problem with postgres scalability,\nnot (inherently) lots of connections. 
I'm sure most of us evaluating \nPostgres\nfrom a background in Sybase or SQLServer would regard 5000\nconnections as no big deal.\n\nThis has the sniff of a badly contended spin-and-yield doesn't it?\n\nI'd guess that if the yield were a sleep for a couple of milliseconds then\nthe lock holder would run an free everything up.\n\n", "msg_date": "Wed, 03 Jun 2009 08:32:16 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "James Mansion <[email protected]> wrote: \n \n> I'm sure most of us evaluating Postgres from a background in Sybase\n> or SQLServer would regard 5000 connections as no big deal.\n \nSure, but the architecture of those products is based around all the\nwork being done by \"engines\" which try to establish affinity to\ndifferent CPUs, and loop through the various tasks to be done. You\ndon't get a context switch storm because you normally have the number\nof engines set at or below the number of CPUs. The down side is that\nthey spend a lot of time spinning around queue access to see if\nanything has become available to do -- which causes them not to play\nnice with other processes on the same box.\n \nIf you do connection pooling and queue requests, you get the best of\nboth worlds. If that could be built into PostgreSQL, it would\nprobably reduce the number of posts requesting support for bad\nconfigurations, and help with benchmarks which don't use proper\nconnection pooling for the product; but it would actually not add any\ncapability which isn't there if you do your homework....\n \n-Kevin\n", "msg_date": "Wed, 03 Jun 2009 09:09:04 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Few weeks ago tested a customer application on 16 cores with Oracle:\n - 20,000 sessions in total\n - 70,000 queries/sec\n\nwithout any problem on a mid-range Sun box + Solaris 10..\n\nRgds,\n-Dimitri\n\nOn 6/3/09, Kevin Grittner <[email protected]> wrote:\n> James Mansion <[email protected]> wrote:\n>\n>> I'm sure most of us evaluating Postgres from a background in Sybase\n>> or SQLServer would regard 5000 connections as no big deal.\n>\n> Sure, but the architecture of those products is based around all the\n> work being done by \"engines\" which try to establish affinity to\n> different CPUs, and loop through the various tasks to be done. You\n> don't get a context switch storm because you normally have the number\n> of engines set at or below the number of CPUs. The down side is that\n> they spend a lot of time spinning around queue access to see if\n> anything has become available to do -- which causes them not to play\n> nice with other processes on the same box.\n>\n> If you do connection pooling and queue requests, you get the best of\n> both worlds. 
If that could be built into PostgreSQL, it would\n> probably reduce the number of posts requesting support for bad\n> configurations, and help with benchmarks which don't use proper\n> connection pooling for the product; but it would actually not add any\n> capability which isn't there if you do your homework....\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 3 Jun 2009 19:26:56 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Dimitri <[email protected]> wrote: \n> Few weeks ago tested a customer application on 16 cores with Oracle:\n> - 20,000 sessions in total\n> - 70,000 queries/sec\n> \n> without any problem on a mid-range Sun box + Solaris 10..\n \nI'm not sure what point you are trying to make. Could you elaborate?\n \n(If it's that Oracle doesn't need an external connection pool, then\nare you advocating that PostgreSQL include that in the base product?)\n \n-Kevin\n", "msg_date": "Wed, 03 Jun 2009 12:45:31 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "\n\nOn 6/3/09 10:45 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n> Dimitri <[email protected]> wrote:\n>> Few weeks ago tested a customer application on 16 cores with Oracle:\n>> - 20,000 sessions in total\n>> - 70,000 queries/sec\n>> \n>> without any problem on a mid-range Sun box + Solaris 10..\n> \n> I'm not sure what point you are trying to make. Could you elaborate?\n> \n> (If it's that Oracle doesn't need an external connection pool, then\n> are you advocating that PostgreSQL include that in the base product?)\n> \n\nHere is how I see it -- not speaking for Dimitri.\n\nAlthough Oracle's connections are heavyweight and expensive to create,\nhaving many of them around and idle does not affect scalability much if at\nall.\n\nPostgres could fix its connection scalability issues -- that is entirely\nindependent of connection pooling.\n\nIn most other databases (all others that I have used), pooling merely\nprevents the expense of connection creation/destruction and helps save some\nRAM and not much else.\nThe fact that it affects scalability and performance beyond that so\ndramatically in Postgres is a problem with Postgres.\n\n\n\n> -Kevin\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 3 Jun 2009 11:12:01 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Just to say you don't need a mega server to keep thousands connections\nwith Oracle, it's just trivial, nor CPU affinity and other stuff you\nmay or may not need with Sybase :-)\n\nRegarding PostgreSQL, I think it'll only benefit to have an integrated\nconnection pooler as it'll make happy all populations anyway:\n - those who don't like the idea may always disable it :-)\n - those who have a lot but mostly inactive sessions will be happy to\nsimplify session pooling\n - those who really seeking for the most optimal workload on their\nservers will be happy twice: if there are any PG scalability limits,\nintegrated pooler will be in most cases more performant than external;\nif there 
are no PG scalability limits - it'll still help to size PG\nmost optimally according a HW or OS capacities..\n\nRgds,\n-Dimitri\n\n\nOn 6/3/09, Kevin Grittner <[email protected]> wrote:\n> Dimitri <[email protected]> wrote:\n>> Few weeks ago tested a customer application on 16 cores with Oracle:\n>> - 20,000 sessions in total\n>> - 70,000 queries/sec\n>>\n>> without any problem on a mid-range Sun box + Solaris 10..\n>\n> I'm not sure what point you are trying to make. Could you elaborate?\n>\n> (If it's that Oracle doesn't need an external connection pool, then\n> are you advocating that PostgreSQL include that in the base product?)\n>\n> -Kevin\n>\n", "msg_date": "Wed, 3 Jun 2009 20:13:39 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey <[email protected]> wrote:\n> Postgres could fix its connection scalability issues -- that is entirely\n> independent of connection pooling.\n\nReally? I'm surprised. I thought the two were very closely related.\nCould you expand on your thinking here?\n\n...Robert\n", "msg_date": "Wed, 3 Jun 2009 14:39:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "\n\nOn 6/3/09 11:39 AM, \"Robert Haas\" <[email protected]> wrote:\n\n> On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey <[email protected]> wrote:\n>> Postgres could fix its connection scalability issues -- that is entirely\n>> independent of connection pooling.\n> \n> Really? I'm surprised. I thought the two were very closely related.\n> Could you expand on your thinking here?\n> \n\nThey are closely related only by coincidence of Postgres' flaws.\nIf Postgres did not scale so poorly as idle connections increase (or as\nactive ones increased), they would be rarely needed at all.\n\nMost connection pools in clients (JDBC, ODBC, for example) are designed to\nlimit the connection create/close count, not the number of idle connections.\nThey reduce creation/deletion specifically by leaving connections idle for a\nwhile to allow re-use. . .\n\nOther things that can be called \"connection concentrators\" differ in that\nthey are additionally trying to put a band-aid over server design flaws that\nmake idle connections hurt scalability. Or to prevent resource consumption\nissues that the database doesn't have enough control over on its own (again,\na flaw -- a server should be as resilient to bad client behavior and its\nresource consumption as possible).\n\n\nMost 'modern' server designs throttle active actions internally. Apache's\n(very old, and truly somewhat 1995-ish) process or thread per connection\nmodel is being abandoned for event driven models in the next version, so it\ncan scale like the higher performing web servers to 20K+ keep-alive\nconnections with significantly fewer threads / processes.\n\nSQL is significantly more complicated than HTTP and requires a lot more\nstate which dictates a very different design, but nothing about it requires\nidle connections to cause reduced SMP scalability.\n\nIn addition to making sure idle connections have almost no impact on\nperformance (just eat up some RAM), scalability as active queries increase\nis important. Although the OS is responsible for a lot of this, there are\nmany things that the application can do to help out. 
If Postgres had a\n\"max_active_connections\" parameter for example, then the memory used by\nwork_mem would be related to this value and not max_connections. This would\nfurther make connection poolers/concentrators less useful from a performance\nand resource management perspective.\n\nOnce the above is done, connection pooling, whether integrated or provided\nby a third party, would mostly only have value for clients who cannot pool\nor cache connections on their own. This is the state of connection pooling\nwith most other DB's today.\n\n> ...Robert\n> \n\n", "msg_date": "Wed, 3 Jun 2009 14:09:45 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "It's not that trivial with Oracle either. I guess you had to use shared \nservers to get to that amount of sessions. They're most of the time not \nactivated by default (dispatchers is at 0).\n\nGranted, they are part of the 'main' product, so you just have to set up \ndispatchers, shared servers, circuits, etc ... but there is still setup to \ndo : dispatchers are (if I recall correctly) a completely manual parameter \n(and every dispatcher can only drive a certain amount of sessions, dependant \non the operating system), where shared servers is a bit more dynamic, but \nstill uses processes (so you may have to tweak max processes also).\n\nWhat I mean to say is that Oracle does something quite alike PostgreSQL + a \nconnection pooler, even if it's more advanced (it's a shared memory structure \nthat is used to send messages between dispatchers and shared servers).\n\nOr did you mean that you had thousands of sessions in dedicated mode ?\n\n\n\nOn Wednesday 03 June 2009 20:13:39 Dimitri wrote:\n> Just to say you don't need a mega server to keep thousands connections\n> with Oracle, it's just trivial, nor CPU affinity and other stuff you\n> may or may not need with Sybase :-)\n>\n> Regarding PostgreSQL, I think it'll only benefit to have an integrated\n> connection pooler as it'll make happy all populations anyway:\n> - those who don't like the idea may always disable it :-)\n> - those who have a lot but mostly inactive sessions will be happy to\n> simplify session pooling\n> - those who really seeking for the most optimal workload on their\n> servers will be happy twice: if there are any PG scalability limits,\n> integrated pooler will be in most cases more performant than external;\n> if there are no PG scalability limits - it'll still help to size PG\n> most optimally according a HW or OS capacities..\n>\n> Rgds,\n> -Dimitri\n>\n> On 6/3/09, Kevin Grittner <[email protected]> wrote:\n> > Dimitri <[email protected]> wrote:\n> >> Few weeks ago tested a customer application on 16 cores with Oracle:\n> >> - 20,000 sessions in total\n> >> - 70,000 queries/sec\n> >>\n> >> without any problem on a mid-range Sun box + Solaris 10..\n> >\n> > I'm not sure what point you are trying to make. 
Could you elaborate?\n> >\n> > (If it's that Oracle doesn't need an external connection pool, then\n> > are you advocating that PostgreSQL include that in the base product?)\n> >\n> > -Kevin\n\n\n", "msg_date": "Thu, 4 Jun 2009 10:22:14 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Wed, Jun 3, 2009 at 5:09 PM, Scott Carey <[email protected]> wrote:\n> On 6/3/09 11:39 AM, \"Robert Haas\" <[email protected]> wrote:\n>> On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey <[email protected]> wrote:\n>>> Postgres could fix its connection scalability issues -- that is entirely\n>>> independent of connection pooling.\n>>\n>> Really?  I'm surprised.  I thought the two were very closely related.\n>> Could you expand on your thinking here?\n>\n> They are closely related only by coincidence of Postgres' flaws.\n> If Postgres did not scale so poorly as idle connections increase (or as\n> active ones increased), they would be rarely needed at all.\n>\n> Most connection pools in clients (JDBC, ODBC, for example) are designed to\n> limit the connection create/close count, not the number of idle connections.\n> They reduce creation/deletion specifically by leaving connections idle for a\n> while to allow re-use. . .\n>\n> Other things that can be called \"connection concentrators\" differ in that\n> they are additionally trying to put a band-aid over server design flaws that\n> make idle connections hurt scalability.  Or to prevent resource consumption\n> issues that the database doesn't have enough control over on its own (again,\n> a flaw -- a server should be as resilient to bad client behavior and its\n> resource consumption as possible).\n>\n>\n> Most 'modern' server designs throttle active actions internally.  Apache's\n> (very old, and truly somewhat 1995-ish) process or thread per connection\n> model is being abandoned for event driven models in the next version, so it\n> can scale like the higher performing web servers to 20K+ keep-alive\n> connections with significantly fewer threads / processes.\n>\n> SQL is significantly more complicated than HTTP and requires a lot more\n> state which dictates a very different design, but nothing about it requires\n> idle connections to cause reduced SMP scalability.\n>\n> In addition to making sure idle connections have almost no impact on\n> performance (just eat up some RAM), scalability as active queries increase\n> is important.  Although the OS is responsible for a lot of this, there are\n> many things that the application can do to help out.  If Postgres had a\n> \"max_active_connections\" parameter for example, then the memory used by\n> work_mem would be related to this value and not max_connections.  This would\n> further make connection poolers/concentrators less useful from a performance\n> and resource management perspective.\n>\n> Once the above is done, connection pooling, whether integrated or provided\n> by a third party, would mostly only have value for clients who cannot pool\n> or cache connections on their own.  This is the state of connection pooling\n> with most other DB's today.\n\nI think I see the distinction you're drawing here. IIUC, you're\narguing that other database products use connection pooling to handle\nrapid connect/disconnect cycles and to throttle the number of\nsimultaneous queries, but not to cope with the possibility of large\nnumbers of idle sessions. 
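(As a side note, it's easy to put a number on how many of those sessions are
genuinely idle at any instant by sampling pg_stat_activity. A rough sketch,
assuming an 8.2/8.3-era server and a superuser session so current_query is
visible:

  select case when current_query = '<IDLE>' then 'idle'
              when current_query = '<IDLE> in transaction' then 'idle in transaction'
              else 'running a query'
         end as state,
         count(*)
    from pg_stat_activity
   group by 1
   order by 2 desc;

On the kind of workload described earlier in the thread, the 'running a
query' bucket tends to be a small fraction of the total connection count.)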
My limited understanding of why PostgreSQL\nhas a problem in this area is that it has to do with the size of the\nprocess array which must be scanned to derive an MVCC snapshot. I'd\nbe curious to know if anyone thinks that's correct, or not.\n\nAssuming for the moment that it's correct, databases that don't use\nMVCC won't have this problem, but they give up a significant amount of\nscalability in other areas due to increased blocking (in particular,\nwriters will block readers). So how do other databases that *do* use\nMVCC mitigate this problem? The only one that we've discussed here is\nOracle, which seems to get around the problem by having a built-in\nconnection pooler. That gets me back to thinking that the two issues\nare related, unless there's some other technique for dealing with the\nneed to derive snapshots.\n\n...Robert\n", "msg_date": "Thu, 4 Jun 2009 06:57:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "\nOn 6/4/09 3:57 AM, \"Robert Haas\" <[email protected]> wrote:\n\n> On Wed, Jun 3, 2009 at 5:09 PM, Scott Carey <[email protected]> wrote:\n>> On 6/3/09 11:39 AM, \"Robert Haas\" <[email protected]> wrote:\n>>> On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey <[email protected]> wrote:\n>>>> Postgres could fix its connection scalability issues -- that is entirely\n>>>> independent of connection pooling.\n>>> \n>>> Really?  I'm surprised.  I thought the two were very closely related.\n>>> Could you expand on your thinking here?\n>> \n>> They are closely related only by coincidence of Postgres' flaws.\n>> If Postgres did not scale so poorly as idle connections increase (or as\n>> active ones increased), they would be rarely needed at all.\n>> \n>> Most connection pools in clients (JDBC, ODBC, for example) are designed to\n>> limit the connection create/close count, not the number of idle connections.\n>> They reduce creation/deletion specifically by leaving connections idle for a\n>> while to allow re-use. . .\n>> \n>> Other things that can be called \"connection concentrators\" differ in that\n>> they are additionally trying to put a band-aid over server design flaws that\n>> make idle connections hurt scalability.  Or to prevent resource consumption\n>> issues that the database doesn't have enough control over on its own (again,\n>> a flaw -- a server should be as resilient to bad client behavior and its\n>> resource consumption as possible).\n>> \n>> \n>> Most 'modern' server designs throttle active actions internally.  Apache's\n>> (very old, and truly somewhat 1995-ish) process or thread per connection\n>> model is being abandoned for event driven models in the next version, so it\n>> can scale like the higher performing web servers to 20K+ keep-alive\n>> connections with significantly fewer threads / processes.\n>> \n>> SQL is significantly more complicated than HTTP and requires a lot more\n>> state which dictates a very different design, but nothing about it requires\n>> idle connections to cause reduced SMP scalability.\n>> \n>> In addition to making sure idle connections have almost no impact on\n>> performance (just eat up some RAM), scalability as active queries increase\n>> is important.  Although the OS is responsible for a lot of this, there are\n>> many things that the application can do to help out.  If Postgres had a\n>> \"max_active_connections\" parameter for example, then the memory used by\n>> work_mem would be related to this value and not max_connections.  
This would\n>> further make connection poolers/concentrators less useful from a performance\n>> and resource management perspective.\n>> \n>> Once the above is done, connection pooling, whether integrated or provided\n>> by a third party, would mostly only have value for clients who cannot pool\n>> or cache connections on their own.  This is the state of connection pooling\n>> with most other DB's today.\n> \n> I think I see the distinction you're drawing here. IIUC, you're\n> arguing that other database products use connection pooling to handle\n> rapid connect/disconnect cycles and to throttle the number of\n> simultaneous queries, but not to cope with the possibility of large\n> numbers of idle sessions. My limited understanding of why PostgreSQL\n> has a problem in this area is that it has to do with the size of the\n> process array which must be scanned to derive an MVCC snapshot. I'd\n> be curious to know if anyone thinks that's correct, or not.\n> \n> Assuming for the moment that it's correct, databases that don't use\n> MVCC won't have this problem, but they give up a significant amount of\n> scalability in other areas due to increased blocking (in particular,\n> writers will block readers). So how do other databases that *do* use\n> MVCC mitigate this problem? The only one that we've discussed here is\n> Oracle, which seems to get around the problem by having a built-in\n> connection pooler. That gets me back to thinking that the two issues\n> are related, unless there's some other technique for dealing with the\n> need to derive snapshots.\n> \n> ...Robert\n> \n\nTo clarify if needed:\n\nI'm not saying the two issues are unrelated. I'm saying that the\nrelationship between connection pooling and a database is multi-dimensional,\nand the scalability improvement does not have a hard dependency on\nconnection pooling.\n\nOn one spectrum, you have the raw performance improvement by caching\nconnections so they do not need to be created and destroyed frequently.\nThis is a universal benefit to all databases, though some have higher\noverhead of connection creation than others. Any book on databases\nmentioning connection pools will list this benefit.\n\nOn another spectrum, a connection pool can act as a concurrency throttle.\nThe benefit of such a thing varies greatly from database to database, but\nthe trend for each DB out there has been to solve this issue internally and\nnot trust client or third party tools to prevent concurrency/scalability\nrelated disasters. \n\nThe latter should be treated separately, a solution to it does not have to\naddress the connection creation/destruction efficiency -- almost all clients\nthese days can do that part, and third party tools are simpler if they only\nhave to meet that goal and not also try and reduce idle connection count.\n\nSo a fix to the connection scalability issues only optionally involves what\nmost would call connection pooling.\n\n-------\nPostgres' MVCC nature has something to do with it, but I'm sure there are\nways to significantly improve the current situation. Locks and processor\ncache-line behavior on larger SMP systems are often strangely behaving\nbeasts.\n\n", "msg_date": "Thu, 4 Jun 2009 11:04:30 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, Jun 4, 2009 at 2:04 PM, Scott Carey <[email protected]> wrote:\n> To clarify if needed:\n>\n> I'm not saying the two issues are unrelated.  
I'm saying that the\n> relationship between connection pooling and a database is multi-dimensional,\n> and the scalability improvement does not have a hard dependency on\n> connection pooling.\n>\n> On one spectrum, you have the raw performance improvement by caching\n> connections so they do not need to be created and destroyed frequently.\n> This is a universal benefit to all databases, though some have higher\n> overhead of connection creation than others.  Any book on databases\n> mentioning connection pools will list this benefit.\n>\n> On another spectrum, a connection pool can act as a concurrency throttle.\n> The benefit of such a thing varies greatly from database to database, but\n> the trend for each DB out there has been to solve this issue internally and\n> not trust client or third party tools to prevent concurrency/scalability\n> related disasters.\n>\n> The latter should be treated separately, a solution to it does not have to\n> address the connection creation/destruction efficiency -- almost all clients\n> these days can do that part, and third party tools are simpler if they only\n> have to meet that goal and not also try and reduce idle connection count.\n>\n> So a fix to the connection scalability issues only optionally involves what\n> most would call connection pooling.\n>\n> -------\n> Postgres' MVCC nature has something to do with it, but I'm sure there are\n> ways to significantly improve the current situation.  Locks and processor\n> cache-line behavior on larger SMP systems are often strangely behaving\n> beasts.\n\nI think in the particular case of PostgreSQL the only suggestions I've\nheard for improving performance with very large numbers of\nsimultaneous connections are (1) connection caching, not so much\nbecause of the overhead of creating the connection as because it\ninvolves creating a whole new process whose private caches start out\ncold, (2) finding a way to reduce ProcArrayLock contention, and (3)\nreducing the cost of deriving a snapshot. I think (2) and (3) are\nrelated but I'm not sure how closely. As far as I know, Simon is the\nonly one to submit a patch in this area and I think I'm not being\nunfair if I say that that particular patch is mostly nibbling around\nthe edges of the problem. There was a discussion a few months ago on\nsome possible changes to the lock modes of ProcArrayLock, based I\nbelieve on some ideas from Tom (might have been Heikki), but I don't\nthink anyone has coded that or tested it.\n\nWe probably won't be able to make significant improvements in this\narea unless someone comes up with some new, good ideas. I agree with\nyou that there are probably ways to significantly improve the current\nsituation, but I'm not sure anyone has figured out with any degree of\nspecificity what they are.\n\n...Robert\n", "msg_date": "Thu, 4 Jun 2009 14:29:05 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Kevin Grittner wrote:\n> Sure, but the architecture of those products is based around all the\n> work being done by \"engines\" which try to establish affinity to\n> different CPUs, and loop through the various tasks to be done. You\n> don't get a context switch storm because you normally have the number\n> of engines set at or below the number of CPUs. 
The down side is that\n> they spend a lot of time spinning around queue access to see if\n> anything has become available to do -- which causes them not to play\n> nice with other processes on the same box.\n> \nThis is just misleading at best. I'm sorry, but (in particular) UNIX \nsystems have routinely\nmanaged large numbers of runnable processes where the run queue lengths are\nlong without such an issue. This is not an issue with the number of \nrunnable threads,\nbut with the way that they wait and what they do.\n\nThe context switch rate reported does not indicate processes using their \ntimeslices\nproductively, unless the load is from a continuous stream of trivial \nRPCs and that\ndoesn't stack up with the good performance and then problematic load \nthat the\nOP reported.\n\n\n", "msg_date": "Thu, 04 Jun 2009 22:25:47 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "James Mansion <[email protected]> wrote: \n> Kevin Grittner wrote:\n>> Sure, but the architecture of those products is based around all\n>> the work being done by \"engines\" which try to establish affinity to\n>> different CPUs, and loop through the various tasks to be done. You\n>> don't get a context switch storm because you normally have the\n>> number of engines set at or below the number of CPUs. The down\n>> side is that they spend a lot of time spinning around queue access\n>> to see if anything has become available to do -- which causes them\n>> not to play nice with other processes on the same box.\n>> \n> This is just misleading at best.\n \nWhat part? Last I checked, Sybase ASE and SQL Server worked as I\ndescribed. Those are the products I was describing. Or is it\nmisleading to say that you aren't likely to get a context switch storm\nif you keep your active thread count at or below the number of CPUs?\n \n> I'm sorry, but (in particular) UNIX systems have routinely\n> managed large numbers of runnable processes where the run queue\n> lengths are long without such an issue.\n \nWell, the OP is looking at tens of thousands of connections. If we\nhave a process per connection, how many tens of thousands can we\nhandle before we get into problems with exhausting possible pid\nnumbers (if nothing else)?\n \n> This is not an issue with the number of runnable threads,\n> but with the way that they wait and what they do.\n \nWell, I rather think it's about both. From a description earlier in\nthis thread, it sounds like Oracle effective builds a connection pool\ninto their product, which gets used by default. The above-referenced\nproducts use a more extreme method of limiting active threads. \nPerhaps they're silly to do so; perhaps not.\n \nI know that if you do use a large number of threads, you have to be\npretty adaptive. In our Java app that pulls data from 72 sources and\nreplicates it to eight, plus feeding it to filters which determine\nwhat publishers for interfaces might be interested, the Sun JVM does\nvery poorly, but the IBM JVM handles it nicely. It seems they use\nvery different techniques for the monitors on objects which\nsynchronize the activity of the threads, and the IBM technique does\nwell when no one monitor is dealing with a very large number of\nblocking threads. 
They got complaints from people who had thousands\nof threads blocking on one monitor, so they now keep a count and\nswitch techniques for an individual monitor if the count gets too\nhigh.\n \nPerhaps something like that (or some other new approach) might\nmitigate the effects of tens of thousands of processes competing for\nfor a few resources, but it fundamentally seems unwise to turn those\nloose to compete if requests can be queued in some way.\n \n-Kevin\n", "msg_date": "Thu, 04 Jun 2009 17:08:44 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "James Mansion <[email protected]> wrote: \n \n>> they spend a lot of time spinning around queue access to see if\n>> anything has become available to do -- which causes them not to\n>> play nice with other processes on the same box.\n \n> UNIX systems have routinely managed large numbers of runnable\n> processes where the run queue lengths are long without such an\n> issue.\n \nHmmm... Did you think the queues I mentioned where OS run queues? In\ncase that's a source of misunderstanding, let me clarify.\n \nSybase ASE (and derivatives) have a number of queues to schedule work.\nWhen something needs doing, it's put on a queue. The \"engines\" cycle\nthrough checking these queues for work, using non-blocking methods for\nI/O where possible. There is a configurable parameter for how many\ntimes an engine should check all queues without finding any work to do\nbefore it voluntarily yields its CPU. This number was always a tricky\none to configure, as it would starve other processes on the box if set\ntoo high, and would cause the DBMS to context switch too much if set\ntoo low. Whenever a new release came out, or we changed the other\nprocesses running on a database server, we had to benchmark to see\nwhere the \"sweet spot\" was. We ranged from 16 to 20000 for this value\nat various times.\n \nThose are the queues I meant.\n \n-Kevin\n", "msg_date": "Thu, 04 Jun 2009 17:33:47 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Kevin Grittner wrote:\n> James Mansion <[email protected]> wrote: \n> \n>> Kevin Grittner wrote:\n>> \n>>> Sure, but the architecture of those products is based around all\n>>> the work being done by \"engines\" which try to establish affinity to\n>>> different CPUs, and loop through the various tasks to be done. You\n>>> don't get a context switch storm because you normally have the\n>>> number of engines set at or below the number of CPUs. The down\n>>> side is that they spend a lot of time spinning around queue access\n>>> to see if anything has become available to do -- which causes them\n>>> not to play nice with other processes on the same box.\n>>> \n>>> \n>> This is just misleading at best.\n>> \n> \n> What part? Last I checked, Sybase ASE and SQL Server worked as I\n> described. Those are the products I was describing. Or is it\n> misleading to say that you aren't likely to get a context switch storm\n> if you keep your active thread count at or below the number of CPUs?\n> \n\nContext switch storm is about how the application and runtime implements \nconcurrent accesses to shared resources, not about the potentials of the \noperating system. For example, if threads all spin every time a \ncondition or event is raised, then yes, a context storm probably occurs \nif there are thousands of threads. 
But, it doesn't have to work that \nway. At it's very simplest, this is the difference between \"wake one \nthread\" (which is then responsible for waking the next thread) vs \"wake \nall threads\". This isn't necessarily the best solution - but it is one \nalternative. Other solutions might involve waking the *right* thread. \nFor example, if I know that a particular thread is waiting on my change \nand it has the highest priority - perhaps I only need to wake that one \nthread. Or, if I know that 10 threads are waiting on my results and can \nact on it, I only need to wake these specific 10 threads. Any system \nwhich actually wakes all threads will probably exhibit scaling limitations.\n\nThe operating system itself only needs to keep threads in the run queue \nif they have work to do. Having thousands of idle thread does not need \nto cost *any* cpu time, if they're kept in an idle thread collection \nseparate from the run queue.\n\n>> I'm sorry, but (in particular) UNIX systems have routinely\n>> managed large numbers of runnable processes where the run queue\n>> lengths are long without such an issue.\n>> \n> Well, the OP is looking at tens of thousands of connections. If we\n> have a process per connection, how many tens of thousands can we\n> handle before we get into problems with exhausting possible pid\n> numbers (if nothing else)?\n> \n\nThis depends if it is 16-bit pid numbers or 32-bit pid numbers. I \nbelieve Linux supports 32-bit pid numbers although I'm not up-to-date on \nwhat the default configurations are for all systems in use today. In \nparticular, Linux 2.6 added support for the O(1) task scheduler, with \nthe express requirement of supporting hundreds of thousands of (mostly \nidle) threads. The support exists. Is it activated or in proper use? I \ndon't know.\n\n> I know that if you do use a large number of threads, you have to be\n> pretty adaptive. In our Java app that pulls data from 72 sources and\n> replicates it to eight, plus feeding it to filters which determine\n> what publishers for interfaces might be interested, the Sun JVM does\n> very poorly, but the IBM JVM handles it nicely. It seems they use\n> very different techniques for the monitors on objects which\n> synchronize the activity of the threads, and the IBM technique does\n> well when no one monitor is dealing with a very large number of\n> blocking threads. They got complaints from people who had thousands\n> of threads blocking on one monitor, so they now keep a count and\n> switch techniques for an individual monitor if the count gets too\n> high.\n> \nCould be, and if so then Sun JVM should really address the problem. \nHowever, having thousands of threads waiting on one monitor probably \nisn't a scalable solution, regardless of whether the JVM is able to \noptimize around your usage pattern or not. Why have thousands of threads \nwaiting on one monitor? That's a bit insane. :-)\n\nYou should really only have as 1X or 2X many threads as there are CPUs \nwaiting on one monitor. Beyond that is waste. 
The idle threads can be \npooled away, and only activated (with individual monitors which can be \nfar more easily and effectively optimized) when the other threads become \nbusy.\n\n> Perhaps something like that (or some other new approach) might\n> mitigate the effects of tens of thousands of processes competing for\n> for a few resources, but it fundamentally seems unwise to turn those\n> loose to compete if requests can be queued in some way.\n> \n\nAn alternative approach might be: 1) Idle processes not currently \nrunning a transaction do not need to be consulted for their snapshot \n(and other related expenses) - if they are idle for a period of time, \nthey \"unregister\" from the actively used processes list - if they become \nactive again, they \"register\" in the actively used process list, and 2) \nProcesses could be reusable across different connections - they could \nstick around for a period after disconnect, and make themselves \navailable again to serve the next connection.\n\nStill heavy-weight in terms of memory utilization, but cheap in terms of \nother impacts. Without the cost of connection \"pooling\" in the sense of \nrequests always being indirect through a proxy of some sort.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Thu, 04 Jun 2009 19:04:37 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, 4 Jun 2009, Robert Haas wrote:\n\n> On Wed, Jun 3, 2009 at 5:09 PM, Scott Carey <[email protected]> wrote:\n>> On 6/3/09 11:39 AM, \"Robert Haas\" <[email protected]> wrote:\n>>> On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey <[email protected]> wrote:\n>>>> Postgres could fix its connection scalability issues -- that is entirely\n>>>> independent of connection pooling.\n>\n> I think I see the distinction you're drawing here.  IIUC, you're\n> arguing that other database products use connection pooling to handle\n> rapid connect/disconnect cycles and to throttle the number of\n> simultaneous queries, but not to cope with the possibility of large\n> numbers of idle sessions.  My limited understanding of why PostgreSQL\n> has a problem in this area is that it has to do with the size of the\n> process array which must be scanned to derive an MVCC snapshot.  I'd\n> be curious to know if anyone thinks that's correct, or not.\n>\n> Assuming for the moment that it's correct, databases that don't use\n> MVCC won't have this problem, but they give up a significant amount of\n> scalability in other areas due to increased blocking (in particular,\n> writers will block readers).  So how do other databases that *do* use\n> MVCC mitigate this problem?  The only one that we've discussed here is\n> Oracle, which seems to get around the problem by having a built-in\n> connection pooler.  That gets me back to thinking that the two issues\n> are related, unless there's some other technique for dealing with the\n> need to derive snapshots.\n\nif this is the case, how hard would it be to have threads add and remove \nthemselves from some list as they get busy/become idle?\n\nI've been puzzled as I've been watching this conversation on what internal \nlocking/lookup is happening that is causing the problems with idle threads \n(the pure memory overhead isn't enough to account for the problems that \nare being reported)\n\nDavid Lang\n", "msg_date": "Thu, 4 Jun 2009 17:51:45 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "\nOn 6/4/09 3:08 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\n> James Mansion <[email protected]> wrote:\n\n>> I'm sorry, but (in particular) UNIX systems have routinely\n>> managed large numbers of runnable processes where the run queue\n>> lengths are long without such an issue.\n> \n> Well, the OP is looking at tens of thousands of connections.  If we\n> have a process per connection, how many tens of thousands can we\n> handle before we get into problems with exhausting possible pid\n> numbers (if nothing else)?\n\nWell, the connections are idle much of the time. The OS doesn't really care\nabout these threads until they are ready to run, and even if they were all\nrunnable, there is little overhead in scheduling.\n\nA context switch storm will only happen if too many threads are woken up\nthat must yield soon after getting to run on the CPU. 
If you wake up 10,000\nthreads, and they all can get significant work done before yielding no\nmatter what order they run, the system will scale extremely well.\n\nHow the lock data structures are built to avoid cache-line collisions and\nminimize cache line updates can also make or break a concurrency scheme and\nis a bit hardware dependent.\n\n\n> I know that if you do use a large number of threads, you have to be\n> pretty adaptive. In our Java app that pulls data from 72 sources and\n> replicates it to eight, plus feeding it to filters which determine\n> what publishers for interfaces might be interested, the Sun JVM does\n> very poorly, but the IBM JVM handles it nicely. It seems they use\n> very different techniques for the monitors on objects which\n> synchronize the activity of the threads, and the IBM technique does\n> well when no one monitor is dealing with a very large number of\n> blocking threads. They got complaints from people who had thousands\n> of threads blocking on one monitor, so they now keep a count and\n> switch techniques for an individual monitor if the count gets too\n> high.\n> \n\nA generic locking solution must be adaptive, yes. But specific solutions\ntailored to specific use cases rarely need to be adaptive. I would think\nthat the 4 or 5 most important locks or concurrency coordination points in\nPostgres have very specific, unique properties.\n\n> Perhaps something like that (or some other new approach) might\n> mitigate the effects of tens of thousands of processes competing for\n> for a few resources, but it fundamentally seems unwise to turn those\n> loose to compete if requests can be queued in some way.\n> \n> -Kevin\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\nThere's a bunch of useful blog posts about locks, concurrency, etc and how\nthey relate to low level hardware here:\nhttp://blogs.sun.com/dave/\n\nIn particular, these are interesting references, (not only for java):\n\nhttp://blogs.sun.com/dave/entry/seqlocks_in_java\nhttp://blogs.sun.com/dave/entry/biased_locking_in_hotspot\nhttp://blogs.sun.com/dave/entry/java_thread_priority_revisted_in\nhttp://blogs.sun.com/dave/entry/hardware_assisted_transactional_read_set\n\n", "msg_date": "Thu, 4 Jun 2009 17:54:50 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, 4 Jun 2009, Mark Mielke wrote:\n\n> Kevin Grittner wrote:\n>> James Mansion <[email protected]> wrote: \n>\n>> I know that if you do use a large number of threads, you have to be\n>> pretty adaptive. In our Java app that pulls data from 72 sources and\n>> replicates it to eight, plus feeding it to filters which determine\n>> what publishers for interfaces might be interested, the Sun JVM does\n>> very poorly, but the IBM JVM handles it nicely. It seems they use\n>> very different techniques for the monitors on objects which\n>> synchronize the activity of the threads, and the IBM technique does\n>> well when no one monitor is dealing with a very large number of\n>> blocking threads. They got complaints from people who had thousands\n>> of threads blocking on one monitor, so they now keep a count and\n>> switch techniques for an individual monitor if the count gets too\n>> high.\n>> \n> Could be, and if so then Sun JVM should really address the problem. 
However, \n> having thousands of threads waiting on one monitor probably isn't a scalable \n> solution, regardless of whether the JVM is able to optimize around your usage \n> pattern or not. Why have thousands of threads waiting on one monitor? That's \n> a bit insane. :-)\n>\n> You should really only have as 1X or 2X many threads as there are CPUs \n> waiting on one monitor. Beyond that is waste. The idle threads can be pooled \n> away, and only activated (with individual monitors which can be far more \n> easily and effectively optimized) when the other threads become busy.\n\nsometimes the decrease in complexity in the client makes it worthwhile to \n'brute force' things.\n\nthis actually works well for the vast majority of services (including many \ndatabases)\n\nthe question is how much complexity (if any) it adds to postgres to handle \nthis condition better, and what those changes are.\n\n>> Perhaps something like that (or some other new approach) might\n>> mitigate the effects of tens of thousands of processes competing for\n>> for a few resources, but it fundamentally seems unwise to turn those\n>> loose to compete if requests can be queued in some way.\n>> \n>\n> An alternative approach might be: 1) Idle processes not currently running a \n> transaction do not need to be consulted for their snapshot (and other related \n> expenses) - if they are idle for a period of time, they \"unregister\" from the \n> actively used processes list - if they become active again, they \"register\" \n> in the actively used process list,\n\nhow expensive is this register/unregister process? if it's cheap enough do \nit all the time and avoid the complexity of having another config option \nto tweak.\n\n> and 2) Processes could be reusable across \n> different connections - they could stick around for a period after \n> disconnect, and make themselves available again to serve the next connection.\n\ndepending on what criteria you have for the re-use, this could be a \nsignificant win (if you manage to re-use the per process cache much. but \nthis is far more complex.\n\n> Still heavy-weight in terms of memory utilization, but cheap in terms of \n> other impacts. Without the cost of connection \"pooling\" in the sense of \n> requests always being indirect through a proxy of some sort.\n\nit would seem to me that the cost of making the extra hop through the \nexternal pooler would be significantly more than the overhead of idle \nprocesses marking themselvs as such so that they don't get consulted for \nMVCC decisions\n\nDavid Lang\n", "msg_date": "Thu, 4 Jun 2009 18:04:07 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, Jun 4, 2009 at 8:51 PM, <[email protected]> wrote:\n> if this is the case, how hard would it be to have threads add and remove\n> themselves from some list as they get busy/become idle?\n>\n> I've been puzzled as I've been watching this conversation on what internal\n> locking/lookup is happening that is causing the problems with idle threads\n> (the pure memory overhead isn't enough to account for the problems that are\n> being reported)\n\nThat's because this thread has altogether too much theory and\naltogether too little gprof. 
:-)\n\n...Robert\n", "msg_date": "Thu, 4 Jun 2009 22:07:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "[email protected] wrote:\n> On Thu, 4 Jun 2009, Mark Mielke wrote:\n>> You should really only have as 1X or 2X many threads as there are \n>> CPUs waiting on one monitor. Beyond that is waste. The idle threads \n>> can be pooled away, and only activated (with individual monitors \n>> which can be far more easily and effectively optimized) when the \n>> other threads become busy.\n> sometimes the decrease in complexity in the client makes it worthwhile \n> to 'brute force' things.\n> this actually works well for the vast majority of services (including \n> many databases)\n> the question is how much complexity (if any) it adds to postgres to \n> handle this condition better, and what those changes are.\n\nSure. Locks that are not generally contended, for example, don't deserve \nthe extra complexity. Locks that have any expected frequency of a \n\"context storm\" though, probably make good candidates.\n\n>> An alternative approach might be: 1) Idle processes not currently \n>> running a transaction do not need to be consulted for their snapshot \n>> (and other related expenses) - if they are idle for a period of time, \n>> they \"unregister\" from the actively used processes list - if they \n>> become active again, they \"register\" in the actively used process list,\n> how expensive is this register/unregister process? if it's cheap \n> enough do it all the time and avoid the complexity of having another \n> config option to tweak.\n\nNot really relevant if you look at the \"idle for a period of time\". An \nactive process would not unregister/register. An inactive process, \nthough, after it is not in a commit, and after it hits some time that is \nmany times more than the cost of unregister + register, would free up \nother processes from having to take this process into account, allowing \nfor better scaling. For example, let's say it doesn't unregister itself \nfor 5 seconds.\n\n>> and 2) Processes could be reusable across different connections - \n>> they could stick around for a period after disconnect, and make \n>> themselves available again to serve the next connection.\n> depending on what criteria you have for the re-use, this could be a \n> significant win (if you manage to re-use the per process cache much. \n> but this is far more complex.\n\nDoes it need to be? From a naive perspective - what's the benefit of a \nPostgreSQL process dying, and a new connection getting a new PostgreSQL \nprocess? I suppose bugs in PostgreSQL don't have the opportunity to \naffect later connections, but overall, this seems like an unnecessary \ncost. I was thinking of either: 1) The Apache model, where a PostreSQL \nprocess waits on accept(), or 2) When the PostgreSQL process is done, it \ndoes connection cleanup and then it waits for a file descriptor to be \ntransferred to it through IPC and just starts over using it. Too hand \nwavy? :-)\n\n>> Still heavy-weight in terms of memory utilization, but cheap in terms \n>> of other impacts. 
Without the cost of connection \"pooling\" in the \n>> sense of requests always being indirect through a proxy of some sort.\n> it would seem to me that the cost of making the extra hop through the \n> external pooler would be significantly more than the overhead of idle \n> processes marking themselvs as such so that they don't get consulted \n> for MVCC decisions\n\nThey're separate ideas to be considered separately on the complexity vs \nbenefit merit.\n\nFor the first - I think we already have an \"external pooler\", in the \nsense of the master process which forks to manage a connection, so it \nalready involves a possible context switch to transfer control. I guess \nthe question is whether or not we can do better than fork(). In \nmulti-threaded programs, it's definitely possible to outdo fork using \nthread pools. Does the same remain true of a multi-process program that \ncommunicates using IPC? I'm not completely sure, although I believe \nApache does achieve this by having the working processes do accept() \nrather than some master process that spawns off new processes on each \nconnection. Apache re-uses the process.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Thu, 04 Jun 2009 23:37:01 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, 4 Jun 2009, Robert Haas wrote:\n\n> That's because this thread has altogether too much theory and\n> altogether too little gprof.\n\nBut running benchmarks and profiling is actual work; that's so much less \nfun than just speculating about what's going on!\n\nThis thread reminds me of Jignesh's \"Proposal of tunable fix for \nscalability of 8.4\" thread from March, except with only a fraction of the \nreal-world detail. There are multiple high-profile locks causing \nscalability concerns at quadruple digit high user counts in the PostgreSQL \ncode base, finding them is easy. Shoot, I know exactly where a couple \nare, and I didn't have to think about it at all--just talked with Jignesh \na couple of times, led me right to them. Fixing them without causing \nregressions in low client count cases, now that's the hard part. No \namount of theoretical discussion advances that any until you're at least \nstaring at a very specific locking problem you've already characterized \nextensively via profiling. And even then, profiling trumps theory every \ntime. This is why I stay out of these discussions and work on boring \nbenchmark tools instead.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 5 Jun 2009 00:13:34 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, 4 Jun 2009, Mark Mielke wrote:\n\n> [email protected] wrote:\n>> On Thu, 4 Jun 2009, Mark Mielke wrote:\n\n>>> An alternative approach might be: 1) Idle processes not currently running \n>>> a transaction do not need to be consulted for their snapshot (and other \n>>> related expenses) - if they are idle for a period of time, they \n>>> \"unregister\" from the actively used processes list - if they become active \n>>> again, they \"register\" in the actively used process list,\n>> how expensive is this register/unregister process? 
if it's cheap enough do \n>> it all the time and avoid the complexity of having another config option to \n>> tweak.\n>\n> Not really relevant if you look at the \"idle for a period of time\". An active \n> process would not unregister/register. An inactive process, though, after it \n> is not in a commit, and after it hits some time that is many times more than \n> the cost of unregister + register, would free up other processes from having \n> to take this process into account, allowing for better scaling. For example, \n> let's say it doesn't unregister itself for 5 seconds.\n\nto do this you need to have the process set an alarm to wake it up. if \ninstead it just checks \"anything else for me to do?, no?, ok I'll go \ninactive until something comes in\" you have a much simpler application.\n\nthe time needed to change it's status from active to inactive should be \n_extremely_ small, even a tenth of a second should be a few orders of \nmagnatude longer than the time it needs to change it's status.\n\nthis does have potential for thrashing if you have lots of short delays on \na lot of threads, but you would need to have multiple CPUs changing status \nat the same time.\n\n>>> and 2) Processes could be reusable across different connections - they \n>>> could stick around for a period after disconnect, and make themselves \n>>> available again to serve the next connection.\n>> depending on what criteria you have for the re-use, this could be a \n>> significant win (if you manage to re-use the per process cache much. but \n>> this is far more complex.\n>\n> Does it need to be? From a naive perspective - what's the benefit of a \n> PostgreSQL process dying, and a new connection getting a new PostgreSQL \n> process? I suppose bugs in PostgreSQL don't have the opportunity to affect \n> later connections, but overall, this seems like an unnecessary cost. I was \n> thinking of either: 1) The Apache model, where a PostreSQL process waits on \n> accept(), or 2) When the PostgreSQL process is done, it does connection \n> cleanup and then it waits for a file descriptor to be transferred to it \n> through IPC and just starts over using it. Too hand wavy? :-)\n\nif the contents of the cache are significantly different for different \nprocesses (say you are servicing queries to different databases), sending \nthe new request to a process that has the correct hot cache could result \nis a very significant speed up compared to the simple 'first available' \napproach.\n\n>>> Still heavy-weight in terms of memory utilization, but cheap in terms of \n>>> other impacts. Without the cost of connection \"pooling\" in the sense of \n>>> requests always being indirect through a proxy of some sort.\n>> it would seem to me that the cost of making the extra hop through the \n>> external pooler would be significantly more than the overhead of idle \n>> processes marking themselvs as such so that they don't get consulted for \n>> MVCC decisions\n>\n> They're separate ideas to be considered separately on the complexity vs \n> benefit merit.\n>\n> For the first - I think we already have an \"external pooler\", in the sense of \n> the master process which forks to manage a connection, so it already involves \n> a possible context switch to transfer control. I guess the question is \n> whether or not we can do better than fork(). In multi-threaded programs, it's \n> definitely possible to outdo fork using thread pools. Does the same remain \n> true of a multi-process program that communicates using IPC? 
I'm not \n> completely sure, although I believe Apache does achieve this by having the \n> working processes do accept() rather than some master process that spawns off \n> new processes on each connection. Apache re-uses the process.\n\nthe current limits of postgres are nowhere close to being the limits of \nfork.\n\non my firewalls I use a forking proxy (each new connection forks a new \nprocess to handle that connection), I can get connection rates of tens of \nthousands per second on linux (other systems may be slower).\n\nbut the here isn't the cost of establishing a new connection, it's the \ncost of having idle connections in the system.\n\nDavid Lang\n", "msg_date": "Thu, 4 Jun 2009 21:29:29 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, 5 Jun 2009, Greg Smith wrote:\n\n> On Thu, 4 Jun 2009, Robert Haas wrote:\n>\n>> That's because this thread has altogether too much theory and\n>> altogether too little gprof.\n>\n> But running benchmarks and profiling is actual work; that's so much less fun \n> than just speculating about what's going on!\n>\n> This thread reminds me of Jignesh's \"Proposal of tunable fix for scalability \n> of 8.4\" thread from March, except with only a fraction of the real-world \n> detail. There are multiple high-profile locks causing scalability concerns \n> at quadruple digit high user counts in the PostgreSQL code base, finding them \n> is easy. Shoot, I know exactly where a couple are, and I didn't have to \n> think about it at all--just talked with Jignesh a couple of times, led me \n> right to them. Fixing them without causing regressions in low client count \n> cases, now that's the hard part. No amount of theoretical discussion \n> advances that any until you're at least staring at a very specific locking \n> problem you've already characterized extensively via profiling. And even \n> then, profiling trumps theory every time. This is why I stay out of these \n> discussions and work on boring benchmark tools instead.\n\nactually, as I see it we are a step before that.\n\nit seems that people are arguing that there is no need to look for and fix \nthis sort of thing, on the basis that anyone who trips over these problems \nis doing something wrong to start with and needs to change the behavior of \ntheir app.\n\nDavid Lang\n", "msg_date": "Thu, 4 Jun 2009 21:33:53 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Greg Smith wrote:\n> This thread reminds me of Jignesh's \"Proposal of tunable fix for \n> scalability of 8.4\" thread from March, except with only a fraction of \n> the real-world detail. There are multiple high-profile locks causing \n> scalability concerns at quadruple digit high user counts in the \n> PostgreSQL code base, finding them is easy. Shoot, I know exactly \n> where a couple are, and I didn't have to think about it at all--just \n> talked with Jignesh a couple of times, led me right to them. Fixing \n> them without causing regressions in low client count cases, now that's \n> the hard part. No amount of theoretical discussion advances that any \n> until you're at least staring at a very specific locking problem \n> you've already characterized extensively via profiling. And even \n> then, profiling trumps theory every time. 
This is why I stay out of \n> these discussions and work on boring benchmark tools instead.\n\nI disagree that profiling trumps theory every time. Profiling is useful \nfor identifying places where the existing architecture exhibits the best \nand worst behaviour. It doesn't tell you whether a different \narchitecture (even a slightly different architecture) would work better \nor worse. It might help identify architecture problems. It does not \nprovide you with architectural solutions.\n\nI think it would be more correct to say that prototyping trumps theory. \nThat is, if somebody has a theory, and they invest time into a \nproof-of-concept patch, and post actual results to show you that \"by \nchanging this code over here to that, I get a N% improvement when using \nthousands of connections, at no measurable cost for the single \nconnection case\", these results will be far more compelling than theory.\n\nStill, it has to involve theory, as not everybody has the time to run \noff and prototype every wild idea. Discussion can determine whether an \nidea has enough merit to be worth investing in a prototype.\n\nI think several valuable theories have been discussed, many of which \ndirectly apply to the domain that PostgreSQL fits within. The question \nisn't about how valuable these theories are - they ARE valuable. The \nquestion is how much support from the team can be gathered to bring \nabout change, and how willing the team is to accept or invest in \narchitectural changes that might take PostgreSQL to the next level. The \nreal problem here is the words \"invest\" and \"might\". That is, people are \nnot going to invest on a \"might\" - people need to be convinced, and for \npeople that don't have a problem today, the motivation to make the \ninvestment is far less.\n\nIn my case, all I have to offer you is theory at this time. I don't have \nthe time to work on PostgreSQL, and I have not invested the time to \nlearn the internals of PostgreSQL well enough to comfortably and \neffectively make changes to implement a theory I might have. I want to \nget there - but there are so many other projects and ideas to pursue, \nand I only have a few hours a day to decide what to spend it on.\n\nYou can tell me \"sorry, your contribution of theory isn't welcome\". In \nfact, that looks like exactly what you have done. :-)\n\nIf the general community agrees with you, I'll stop my contributions of \ntheories. :-)\n\nI think, though, that some of the PostgreSQL architecture is \"old \ntheory\". I have this silly idea that PostgreSQL could one day be better \nthan Oracle (in terms of features and performance - PostgreSQL already \nbeats Oracle on cost :-) ). It won't get there without some significant \nchanges. In only the last few years, I have watched as some pretty \nsignificant changes were introduced into PostgreSQL that significantly \nimproved its performance and feature set. Many of these changes might \nhave started with profiling - but the real change came from applied \ntheory, not from profiling. Bitmap indexes are an example of this. \nProfiling tells you what - that large joins involving OR are slow? 
It \ntakes theory to answer \"why\" and \"so, what do we do about it?\"\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Fri, 05 Jun 2009 01:56:08 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Mark Mielke <[email protected]> wrote: \n> Kevin Grittner wrote:\n>> James Mansion <[email protected]> wrote: \n>>> Kevin Grittner wrote:\n>>>\n>>>> Sure, but the architecture of those products is based around all\n>>>> the work being done by \"engines\" which try to establish affinity\n>>>> to different CPUs, and loop through the various tasks to be done.\n>>>> You don't get a context switch storm because you normally have\n>>>> the number of engines set at or below the number of CPUs.\n>>>>\n>>> This is just misleading at best.\n>>\n>> What part? Last I checked, Sybase ASE and SQL Server worked as I\n>> described. Those are the products I was describing. Or is it\n>> misleading to say that you aren't likely to get a context switch\n>> storm if you keep your active thread count at or below the number\n>> of CPUs?\n> \n> Context switch storm is about how the application and runtime\n> implements concurrent accesses to shared resources, not about the\n> potentials of the operating system.\n \nI'm really not following how that's responsive to my questions or\npoints, at all. You're making pretty basic and obvious points about\nother ways to avoid the problem, but the fact is that the other\ndatabases people point to as examples of handling large numbers of\nconnections have (so far at least) been ones which solve the problems\nin other ways than what people seem to be proposing. That doesn't\nmean that the techniques used by these other products are the only way\nto solve the issue, or even that they are the best ways; but it does\nmean that pointing to those other products doesn't prove anything\nrelative to what lock optimization is likely to buy us.\n \n> For example, if threads all spin every time a condition or event is\n> raised, then yes, a context storm probably occurs if there are\n> thousands of threads. But, it doesn't have to work that way. At it's\n> very simplest, this is the difference between \"wake one thread\"\n> (which is then responsible for waking the next thread) vs \"wake all\n> threads\". This isn't necessarily the best solution - but it is one\n> alternative. Other solutions might involve waking the *right*\n> thread. For example, if I know that a particular thread is waiting\n> on my change and it has the highest priority - perhaps I only need\n> to wake that one thread. Or, if I know that 10 threads are waiting\n> on my results and can act on it, I only need to wake these specific\n> 10 threads. Any system which actually wakes all threads will\n> probably exhibit scaling limitations.\n \nI would be surprised if any of this is not obvious to all on the list.\n \n>>> I'm sorry, but (in particular) UNIX systems have routinely\n>>> managed large numbers of runnable processes where the run queue\n>>> lengths are long without such an issue.\n>>> \n>> Well, the OP is looking at tens of thousands of connections. If we\n>> have a process per connection, how many tens of thousands can we\n>> handle before we get into problems with exhausting possible pid\n>> numbers (if nothing else)?\n> \n> This depends if it is 16-bit pid numbers or 32-bit pid numbers. 
I \n> believe Linux supports 32-bit pid numbers although I'm not up-to-date\non \n> what the default configurations are for all systems in use today. In\n\n> particular, Linux 2.6 added support for the O(1) task scheduler, with\n\n> the express requirement of supporting hundreds of thousands of\n(mostly \n> idle) threads. The support exists. Is it activated or in proper use?\nI \n> don't know.\n \nInteresting. I'm running the latest SuSE Enterprise on a 64 bit\nsystem with 128 GB RAM and 16 CPUs, yet my pids and port numbers are\n16 bit. Since I only use a tiny fraction of the available numbers\nusing current techniques, I don't need to look at this yet, but I'll\nkeep it in mind.\n \n>> I know that if you do use a large number of threads, you have to be\n>> pretty adaptive. In our Java app that pulls data from 72 sources\nand\n>> replicates it to eight, plus feeding it to filters which determine\n>> what publishers for interfaces might be interested, the Sun JVM\ndoes\n>> very poorly, but the IBM JVM handles it nicely. It seems they use\n>> very different techniques for the monitors on objects which\n>> synchronize the activity of the threads, and the IBM technique does\n>> well when no one monitor is dealing with a very large number of\n>> blocking threads. They got complaints from people who had\nthousands\n>> of threads blocking on one monitor, so they now keep a count and\n>> switch techniques for an individual monitor if the count gets too\n>> high.\n>> \n> Could be, and if so then Sun JVM should really address the problem.\n \nI wish they would.\n \n> However, having thousands of threads waiting on one monitor probably\n\n> isn't a scalable solution, regardless of whether the JVM is able to \n> optimize around your usage pattern or not. Why have thousands of\nthreads \n> waiting on one monitor? That's a bit insane. :-)\n \nAgreed. We weren't the ones complaining to IBM. :-)\n \n>> Perhaps something like that (or some other new approach) might\n>> mitigate the effects of tens of thousands of processes competing\nfor\n>> for a few resources, but it fundamentally seems unwise to turn\nthose\n>> loose to compete if requests can be queued in some way.\n> \n> An alternative approach might be: 1) Idle processes not currently \n> running a transaction do not need to be consulted for their snapshot\n\n> (and other related expenses) - if they are idle for a period of time,\n\n> they \"unregister\" from the actively used processes list - if they\nbecome \n> active again, they \"register\" in the actively used process list, and\n2) \n> Processes could be reusable across different connections - they could\n\n> stick around for a period after disconnect, and make themselves \n> available again to serve the next connection.\n> \n> Still heavy-weight in terms of memory utilization, but cheap in terms\nof \n> other impacts. Without the cost of connection \"pooling\" in the sense\nof \n> requests always being indirect through a proxy of some sort.\n \nJust guessing here, but I would expect the cost of such forwarding to\nbe pretty insignificant compared to the cost of even parsing the\nquery, much less running it. 
That would be especially true if the\npool\nwas integrated into the DBMS in a way similar to what was described as\nthe\nOracle default.\n \n-Kevin\n", "msg_date": "Fri, 05 Jun 2009 09:29:11 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Scott Carey <[email protected]> wrote:\n \n> If you wake up 10,000 threads, and they all can get significant work\n> done before yielding no matter what order they run, the system will\n> scale extremely well.\n \nBut with roughly twice the average response time you would get\nthrottling active requests to the minimum needed to keep all resources\nbusy. (Admittedly a hard point to find with precision.)\n \n> I would think that the 4 or 5 most important locks or concurrency\n> coordination points in Postgres have very specific, unique\n> properties.\n \nGiven the wide variety of uses I'd be cautious about such assumptions.\n \n> In particular, these are interesting references, (not only for\njava):\n \nWith this wealth of opinion, perhaps they can soon approach IBM's JVM\nin their ability to support a large number of threads. I'm rooting\nfor them.\n \n-Kevin\n", "msg_date": "Fri, 05 Jun 2009 09:42:45 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, Jun 5, 2009 at 12:33 AM, <[email protected]> wrote:\n> On Fri, 5 Jun 2009, Greg Smith wrote:\n>\n>> On Thu, 4 Jun 2009, Robert Haas wrote:\n>>\n>>> That's because this thread has altogether too much theory and\n>>> altogether too little gprof.\n>>\n>> But running benchmarks and profiling is actual work; that's so much less\n>> fun than just speculating about what's going on!\n>>\n>> This thread reminds me of Jignesh's \"Proposal of tunable fix for\n>> scalability of 8.4\" thread from March, except with only a fraction of the\n>> real-world detail.  There are multiple high-profile locks causing\n>> scalability concerns at quadruple digit high user counts in the PostgreSQL\n>> code base, finding them is easy.  Shoot, I know exactly where a couple are,\n>> and I didn't have to think about it at all--just talked with Jignesh a\n>> couple of times, led me right to them.  Fixing them without causing\n>> regressions in low client count cases, now that's the hard part.  No amount\n>> of theoretical discussion advances that any until you're at least staring at\n>> a very specific locking problem you've already characterized extensively via\n>> profiling.  And even then, profiling trumps theory every time.  This is why\n>> I stay out of these discussions and work on boring benchmark tools instead.\n>\n> actually, as I see it we are a step before that.\n>\n> it seems that people are arguing that there is no need to look for and fix\n> this sort of thing, on the basis that anyone who trips over these problems\n> is doing something wrong to start with and needs to change the behavior of\n> their app.\n\nI have a slightly different take on that. I don't think there's\nactually resistance to improving this situation if someone (or some\ngroup of people) comes up with a good proposal for doing it and writes\na patch and tests it and shows that it helps that case without hurting\nother cases that people care about. And there is clearly great\nwillingness to tell people what to do until that happens: use\nconnection pooling. 
But if you come back and say, well, I shouldn't\nhave to use connection pooling because it should work without\nconnection pooling, well, OK, but...\n\n...Robert\n", "msg_date": "Fri, 5 Jun 2009 11:19:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Greg Smith wrote:\n> No amount of theoretical discussion advances that any until \n> you're at least staring at a very specific locking problem you've \n> already characterized extensively via profiling. And even then, \n> profiling trumps theory every time.\n\nIn theory, there is no difference between theory and practice. In practice, there is a great deal of difference.\n\nCraig\n", "msg_date": "Fri, 05 Jun 2009 09:01:10 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, 5 Jun 2009, Mark Mielke wrote:\n\n> I disagree that profiling trumps theory every time.\n\nThat's an interesting theory. Unfortunately, profiling shows it doesn't \nwork that way.\n\nLet's see if I can summarize the state of things a bit better here:\n\n1) PostgreSQL stops working as efficiently with >1000 active connections\n\n2) Profiling suggests the first barrier that needs to be resolved to fix \nthat is how the snapshots needed to support MVCC are derived\n\n3) There are multiple patches around that aim to improve that specific \nsituation, but only being tested aggressively by one contributor so far \n(that I'm aware of)\n\n4) Those patches might cause a regression for other workloads, and the \nsection of code involved was very hard to get working well initially. \nBefore any change here will be accepted there needs to be a lot of data \nproving it both does what expected and doesn't introduce a regression.\n\n5) Few people are motivated to get their hands dirty doing the boring \nbenchmarking work to resolve this specific problem because \"use a \nconnection pool\" is a quite good workaround\n\n6) Many other database vendors admit this problem so is hard to solve that \nthey also just suggest using a connection pool\n\nIf anyone wants to work on improving things here, (4) is the sticking \npoint that could use more hands. Not much theory involved, but there is a \nwhole lot of profiling.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 5 Jun 2009 13:02:07 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Thu, 4 Jun 2009, Mark Mielke wrote:\n\n> At it's very simplest, this is the difference between \"wake one thread\" \n> (which is then responsible for waking the next thread) vs \"wake all \n> threads\"....Any system which actually wakes all threads will probably \n> exhibit scaling limitations.\n\nThe prototype patch we got from Jignesh improved his specific workload by \nwaking more waiting processes than were being notified in the current \ncode. 
The bottleneck that's been best examined so far at high client \ncounts is not because of too much waking, it's caused by not enough.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 5 Jun 2009 13:15:28 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "On Fri, Jun 5, 2009 at 1:02 PM, Greg Smith<[email protected]> wrote:\n> On Fri, 5 Jun 2009, Mark Mielke wrote:\n>> I disagree that profiling trumps theory every time.\n> That's an interesting theory.  Unfortunately, profiling shows it doesn't\n> work that way.\n\nI had a laugh when I read this, but I can see someone being offended\nby it. Hopefully no one took it that way.\n\n> Let's see if I can summarize the state of things a bit better here:\n>\n> 1) PostgreSQL stops working as efficiently with >1000 active connections\n>\n> 2) Profiling suggests the first barrier that needs to be resolved to fix\n> that is how the snapshots needed to support MVCC are derived\n>\n> 3) There are multiple patches around that aim to improve that specific\n> situation, but only being tested aggressively by one contributor so far\n> (that I'm aware of)\n\nI am actually aware of only two forays into this area that have been\nreduced to code. I am pretty much convinced that Jignesh's\nwake-all-waiters patch is fundamentally - dare I say theoretically -\nunsound, however much it may improve performance for his particular\nworkload. The other is Simon's patch which AIUI is a fast-path for\nthe case where nothing has changed. Are you aware of any others?\n\nThanks for the summary.\n\n...Robert\n", "msg_date": "Fri, 5 Jun 2009 15:48:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" } ]
[ { "msg_contents": "Hi,\nWe have one query which has a left join. If we run this query without \nthe left join, it runs slower than with the left join.\n\n-query with the left join:\n\nEXPLAIN ANALYZE\nSELECT \nartifact.id AS id,\nartifact.priority AS priority,\nitem.title AS title,\nitem.name AS name,\nfield_value2.value AS status,\nfield_value3.value AS category,\nsfuser.username AS submittedByUsername,\nsfuser.full_name AS submittedByFullname,\nsfuser2.username AS assignedToUsername,\nsfuser2.full_name AS assignedToFullname,\nitem.version AS version\nFROM \nsfuser sfuser,\nrelationship relationship,\nitem item,\nfield_value field_value3,\nsfuser sfuser2,\nproject project,\nfield_value field_value2,\nfield_value field_value,\nartifact artifact,\nfolder folder,\nfield_value field_value4\nWHERE \nartifact.id=item.id\nAND item.folder_id=folder.id\nAND folder.project_id=project.id\nAND artifact.group_fv=field_value.id\nAND artifact.status_fv=field_value2.id\nAND artifact.category_fv=field_value3.id\nAND artifact.customer_fv=field_value4.id\nAND item.created_by_id=sfuser.id\nAND relationship.is_deleted=false\nAND relationship.relationship_type_name='ArtifactAssignment'\nAND relationship.origin_id=sfuser2.id\nAND artifact.id=relationship.target_id\nAND item.is_deleted=false\nAND project.path='projects.gl_coconet_performance_improveme'\nAND item.folder_id='tracker3641'\nAND folder.path='tracker.perf_test'\nAND (field_value2.value_class='Open');\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=47645.19..89559.37 rows=1 width=155) (actual time=4411.623..6953.329 rows=71 loops=1)\n Hash Cond: ((folder.project_id)::text = (project.id)::text)\n -> Nested Loop (cost=47640.91..89553.64 rows=384 width=167) (actual time=4411.558..6953.136 rows=71 loops=1)\n -> Index Scan using folder_pk on folder (cost=0.00..4.35 rows=1 width=26) (actual time=0.029..0.032 rows=1 loops=1)\n Index Cond: ('tracker3641'::text = (id)::text)\n Filter: ((path)::text = 'tracker.perf_test'::text)\n -> Nested Loop (cost=47640.91..89545.46 rows=384 width=168) (actual time=4411.525..6953.052 rows=71 loops=1)\n -> Nested Loop (cost=47640.91..89434.35 rows=384 width=150) (actual time=4411.508..6952.049 rows=71 loops=1)\n -> Nested Loop (cost=47640.91..89296.15 rows=384 width=149) (actual time=4411.489..6950.823 rows=71 loops=1)\n -> Nested Loop (cost=47640.91..89157.95 rows=384 width=162) (actual time=4411.469..6949.629 rows=71 loops=1)\n -> Nested Loop (cost=47640.91..89019.74 rows=384 width=175) (actual time=4411.443..6948.289 rows=71 loops=1)\n -> Nested Loop (cost=47640.91..88464.52 rows=1819 width=157) (actual time=4411.418..6947.188 rows=71 loops=1)\n -> Merge Join (cost=47640.91..83661.94 rows=2796 width=158) (actual time=4411.355..6945.443 rows=71 loops=1)\n Merge Cond: ((item.id)::text = \"inner\".\"?column7?\")\n -> Index Scan using item_pk on item (cost=0.00..176865.31 rows=97498 width=88) (actual time=117.304..2405.060 rows=71 loops=1)\n Filter: ((NOT is_deleted) AND ((folder_id)::text = 'tracker3641'::text))\n -> Sort (cost=47640.91..47808.10 rows=66876 width=70) (actual time=4273.919..4401.387 rows=168715 loops=1)\n Sort Key: (artifact.id)::text\n -> Hash Join (cost=9271.96..42281.07 rows=66876 width=70) (actual time=124.119..794.667 rows=184378 loops=1)\n Hash Cond: ((artifact.status_fv)::text = (field_value2.id)::text)\n -> 
Seq Scan on artifact (cost=0.00..25206.14 rows=475614 width=69) (actual time=0.008..214.459 rows=468173 loops=1)\n -> Hash (cost=8285.92..8285.92 rows=78883 width=27) (actual time=124.031..124.031 rows=79488 loops=1)\n -> Index Scan using field_class_idx on field_value field_value2 (cost=0.00..8285.92 rows=78883 width=27) (actual time=0.049..60.599 rows=79488 loops=1)\n Index Cond: ((value_class)::text = 'Open'::text)\n -> Index Scan using relation_target on relationship (cost=0.00..1.71 rows=1 width=25) (actual time=0.022..0.022 rows=1 loops=71)\n Index Cond: ((artifact.id)::text = (relationship.target_id)::text)\n Filter: ((NOT is_deleted) AND ((relationship_type_name)::text = 'ArtifactAssignment'::text))\n -> Index Scan using sfuser_pk on sfuser (cost=0.00..0.29 rows=1 width=42) (actual time=0.013..0.013 rows=1 loops=71)\n Index Cond: ((item.created_by_id)::text = (sfuser.id)::text)\n -> Index Scan using field_value_pk on field_value field_value4 (cost=0.00..0.35 rows=1 width=13) (actual time=0.017..0.017 rows=1 loops=71)\n Index Cond: ((artifact.customer_fv)::text = (field_value4.id)::text)\n -> Index Scan using field_value_pk on field_value (cost=0.00..0.35 rows=1 width=13) (actual time=0.015..0.015 rows=1 loops=71)\n Index Cond: ((artifact.group_fv)::text = (field_value.id)::text)\n -> Index Scan using field_value_pk on field_value field_value3 (cost=0.00..0.35 rows=1 width=27) (actual time=0.015..0.015 rows=1 loops=71)\n Index Cond: ((artifact.category_fv)::text = (field_value3.id)::text)\n -> Index Scan using sfuser_pk on sfuser sfuser2 (cost=0.00..0.28 rows=1 width=42) (actual time=0.012..0.012 rows=1 loops=71)\n Index Cond: ((relationship.origin_id)::text = (sfuser2.id)::text)\n -> Hash (cost=4.27..4.27 rows=1 width=12) (actual time=0.048..0.048 rows=1 loops=1)\n -> Index Scan using project_path on project (cost=0.00..4.27 rows=1 width=12) (actual time=0.041..0.042 rows=1 loops=1)\n Index Cond: ((path)::text = 'projects.gl_coconet_performance_improveme'::text)\n Total runtime: 6966.099 ms\n\n-same query but without the left join\n\nEXPLAIN ANALYZE\nSELECT \nartifact.id AS id,\nartifact.priority AS priority,\nitem.title AS title,\nitem.name AS name,\nfield_value2.value AS status,\nfield_value3.value AS category,\nsfuser.username AS submittedByUsername,\nsfuser.full_name AS submittedByFullname,\nsfuser2.username AS assignedToUsername,\nsfuser2.full_name AS assignedToFullname,\nitem.version AS version ,\nmntr_subscription.user_id AS monitoringUserId\nFROM \nsfuser sfuser,\nrelationship relationship,\nitem item,\nfield_value field_value3,\nsfuser sfuser2,\nproject project,\nfield_value field_value2,\nfield_value field_value,\nartifact artifact \nLEFT JOIN \nmntr_subscription mntr_subscription\nON \nmntr_subscription.object_key=artifact.id AND ((mntr_subscription.user_id='17272')),\nfolder folder,\nfield_value field_value4\nWHERE \nartifact.id=item.id\nAND item.folder_id=folder.id\nAND folder.project_id=project.id\nAND artifact.group_fv=field_value.id\nAND artifact.status_fv=field_value2.id\nAND artifact.category_fv=field_value3.id\nAND artifact.customer_fv=field_value4.id\nAND item.created_by_id=sfuser.id\nAND relationship.is_deleted=false\nAND relationship.relationship_type_name='ArtifactAssignment'\nAND relationship.origin_id=sfuser2.id\nAND artifact.id=relationship.target_id\nAND item.is_deleted=false\nAND project.path='projects.gl_coconet_performance_improveme'\nAND item.folder_id='tracker3641'\nAND folder.path='tracker.perf_test'\nAND 
(field_value2.value_class='Open');\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=117.16..102664.10 rows=1 width=167) (actual time=392.383..3412.452 rows=71 loops=1)\n Join Filter: ((folder.project_id)::text = (project.id)::text)\n -> Index Scan using project_path on project (cost=0.00..4.27 rows=1 width=12) (actual time=0.040..0.041 rows=1 loops=1)\n Index Cond: ((path)::text = 'projects.gl_coconet_performance_improveme'::text)\n -> Nested Loop (cost=117.16..102655.03 rows=384 width=179) (actual time=392.331..3412.303 rows=71 loops=1)\n -> Index Scan using folder_pk on folder (cost=0.00..4.35 rows=1 width=26) (actual time=0.034..0.036 rows=1 loops=1)\n Index Cond: ('tracker3641'::text = (id)::text)\n Filter: ((path)::text = 'tracker.perf_test'::text)\n -> Nested Loop (cost=117.16..102646.84 rows=384 width=180) (actual time=392.293..3412.193 rows=71 loops=1)\n -> Nested Loop (cost=117.16..102535.74 rows=384 width=162) (actual time=392.276..3411.189 rows=71 loops=1)\n -> Nested Loop (cost=117.16..102397.53 rows=384 width=161) (actual time=392.258..3409.958 rows=71 loops=1)\n -> Nested Loop (cost=117.16..102259.33 rows=384 width=174) (actual time=392.239..3408.734 rows=71 loops=1)\n -> Nested Loop (cost=117.16..102121.13 rows=384 width=187) (actual time=392.220..3407.479 rows=71 loops=1)\n -> Nested Loop (cost=117.16..101565.91 rows=1819 width=169) (actual time=392.195..3406.360 rows=71 loops=1)\n -> Nested Loop (cost=117.16..96763.32 rows=2796 width=170) (actual time=392.150..3404.791 rows=71 loops=1)\n -> Merge Join (cost=117.16..89555.79 rows=19888 width=169) (actual time=392.092..3403.281 rows=71 loops=1)\n Merge Cond: ((artifact.id)::text = (item.id)::text)\n -> Merge Left Join (cost=117.16..52509.18 rows=475614 width=81) (actual time=0.050..715.999 rows=380704 loops=1)\n Merge Cond: ((artifact.id)::text = \"inner\".\"?column3?\")\n -> Index Scan using artifact_pk on artifact (cost=0.00..51202.63 rows=475614 width=69) (actual time=0.014..424.003 rows=380704 loops=1)\n -> Sort (cost=117.16..117.30 rows=58 width=25) (actual time=0.033..0.033 rows=0 loops=1)\n Sort Key: (mntr_subscription.object_key)::text\n -> Index Scan using mntr_subscr_usrevt on mntr_subscription (cost=0.00..115.46 rows=58 width=25) (actual time=0.018..0.018 rows=0 loops=1)\n Index Cond: ((user_id)::text = '17272'::text)\n -> Index Scan using item_pk on item (cost=0.00..176865.31 rows=97498 width=88) (actual time=116.898..2404.612 rows=71 loops=1)\n Filter: ((NOT is_deleted) AND ((folder_id)::text = 'tracker3641'::text))\n -> Index Scan using field_value_pk on field_value field_value2 (cost=0.00..0.35 rows=1 width=27) (actual time=0.019..0.019 rows=1 loops=71)\n Index Cond: ((artifact.status_fv)::text = (field_value2.id)::text)\n Filter: ((value_class)::text = 'Open'::text)\n -> Index Scan using relation_target on relationship (cost=0.00..1.71 rows=1 width=25) (actual time=0.020..0.020 rows=1 loops=71)\n Index Cond: ((artifact.id)::text = (relationship.target_id)::text)\n Filter: ((NOT is_deleted) AND ((relationship_type_name)::text = 'ArtifactAssignment'::text))\n -> Index Scan using sfuser_pk on sfuser (cost=0.00..0.29 rows=1 width=42) (actual time=0.013..0.014 rows=1 loops=71)\n Index Cond: ((item.created_by_id)::text = (sfuser.id)::text)\n -> Index Scan using field_value_pk on field_value field_value4 (cost=0.00..0.35 rows=1 
width=13) (actual time=0.015..0.016 rows=1 loops=71)\n Index Cond: ((artifact.customer_fv)::text = (field_value4.id)::text)\n -> Index Scan using field_value_pk on field_value (cost=0.00..0.35 rows=1 width=13) (actual time=0.015..0.015 rows=1 loops=71)\n Index Cond: ((artifact.group_fv)::text = (field_value.id)::text)\n -> Index Scan using field_value_pk on field_value field_value3 (cost=0.00..0.35 rows=1 width=27) (actual time=0.015..0.015 rows=1 loops=71)\n Index Cond: ((artifact.category_fv)::text = (field_value3.id)::text)\n -> Index Scan using sfuser_pk on sfuser sfuser2 (cost=0.00..0.28 rows=1 width=42) (actual time=0.012..0.012 rows=1 loops=71)\n Index Cond: ((relationship.origin_id)::text = (sfuser2.id)::text)\n Total runtime: 3413.006 ms\n(43 rows)\n\n\nI am having a hard time to understand why the query runs faster with the \nleft join.\n\nCan you help me understanding how that is possible?\n\nThanks,\nAnne\n", "msg_date": "Thu, 28 May 2009 15:46:43 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Unexpected query plan results" }, { "msg_contents": "> From: Anne Rosset\n> Subject: [PERFORM] Unexpected query plan results\n> \n> Hi,\n> We have one query which has a left join. If we run this query without \n> the left join, it runs slower than with the left join.\n[snip]\n> I am having a hard time to understand why the query runs \n> faster with the \n> left join.\n> \n\nIt looks like the query plan for the query without the left join is less\nthan optimal. Adding the left join just seemed to shake things up enough\nthat postgres picked a better plan. The slow step in the query without the\nleft join appears to be sorting the result of a hash join so it can be used\nin a merge join.\n\n -> Sort (cost=47640.91..47808.10 rows=66876 width=70) (actual\ntime=4273.919..4401.387 rows=168715 loops=1)\n Sort Key: (artifact.id)::text\n -> Hash Join (cost=9271.96..42281.07 rows=66876 width=70)\n(actual time=124.119..794.667 rows=184378 loops=1)\n\nThe plan might be sped up by removing the sort or making the sort faster.\nPostgres thinks the Hash Join will only produce 66,876 rows, but it produces\n184,378 rows. If it made a better estimate of the results of the hash join,\nit might not choose this plan. I don't really know if there is a way to\nimprove the estimate on a join when the estimates of the inputs look pretty\ngood. \n\nAs a test you might try disabling sorts by setting enable_sort to false,\nthen run the explain analyze again to see what you get.\n\nYou might be able to make the sort faster by increasing work mem. What do\nyou have work mem set to now and what version of Postgres are you using?\n\n\nDave\n\n", "msg_date": "Fri, 29 May 2009 10:55:45 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "Dave Dutcher wrote:\n\n>>From: Anne Rosset\n>>Subject: [PERFORM] Unexpected query plan results\n>>\n>>Hi,\n>>We have one query which has a left join. If we run this query without \n>>the left join, it runs slower than with the left join.\n>> \n>>\n>[snip]\n> \n>\n>>I am having a hard time to understand why the query runs \n>>faster with the \n>>left join.\n>>\n>> \n>>\n>\n>It looks like the query plan for the query without the left join is less\n>than optimal. Adding the left join just seemed to shake things up enough\n>that postgres picked a better plan. 
The slow step in the query without the\n>left join appears to be sorting the result of a hash join so it can be used\n>in a merge join.\n>\n> -> Sort (cost=47640.91..47808.10 rows=66876 width=70) (actual\n>time=4273.919..4401.387 rows=168715 loops=1)\n> Sort Key: (artifact.id)::text\n> -> Hash Join (cost=9271.96..42281.07 rows=66876 width=70)\n>(actual time=124.119..794.667 rows=184378 loops=1)\n>\n>The plan might be sped up by removing the sort or making the sort faster.\n>Postgres thinks the Hash Join will only produce 66,876 rows, but it produces\n>184,378 rows. If it made a better estimate of the results of the hash join,\n>it might not choose this plan. I don't really know if there is a way to\n>improve the estimate on a join when the estimates of the inputs look pretty\n>good. \n>\n>As a test you might try disabling sorts by setting enable_sort to false,\n>then run the explain analyze again to see what you get.\n>\n>You might be able to make the sort faster by increasing work mem. What do\n>you have work mem set to now and what version of Postgres are you using?\n>\n>\n>Dave\n>\n> \n>\nThank Dave. We are using postgresql-server-8.2.4-1PGDG and have work-mem \nset to 20MB.\nWhat value would you advise?\nthanks,\n\nAnne\n", "msg_date": "Fri, 29 May 2009 09:05:04 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "> From: Anne Rosset\n> Subject: Re: [PERFORM] Unexpected query plan results\n> > \n> >\n> Thank Dave. We are using postgresql-server-8.2.4-1PGDG and \n> have work-mem set to 20MB.\n> What value would you advise?\n> thanks,\n> \n> Anne\n\n\nWork-mem is kind of tricky because the right setting depends on how much ram\nyour machine has, is the machine dedicated to postgres, and how many\nsimultaneous connections you have. If this is a test server, and not used\nin production, you could just play around with the setting and see if your\nquery gets any faster. \n\nHere are the docs on work mem\n\nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html#\nRUNTIME-CONFIG-RESOURCE-MEMORY\n\n", "msg_date": "Fri, 29 May 2009 12:30:38 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "On Fri, May 29, 2009 at 1:30 PM, Dave Dutcher <[email protected]> wrote:\n\n> > From: Anne Rosset\n> > Subject: Re: [PERFORM] Unexpected query plan results\n> > >\n> > >\n> > Thank Dave. We are using postgresql-server-8.2.4-1PGDG and\n> > have work-mem set to 20MB.\n> > What value would you advise?\n> > thanks,\n> >\n> > Anne\n>\n>\n> Work-mem is kind of tricky because the right setting depends on how much\n> ram\n> your machine has, is the machine dedicated to postgres, and how many\n> simultaneous connections you have. If this is a test server, and not used\n> in production, you could just play around with the setting and see if your\n> query gets any faster.\n\n\n Right, the trick to remember is that you could possibly end up in a\nscenario where you have max_connections * work_mem being used just for\nsorting / joins and the rest of your memory will be swapped, so be careful\nnot to push too high. 
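[Aside, for illustration only, not part of the original message: because work_mem can be changed per session, a low-risk way to try the "play around with the setting" suggestion without editing postgresql.conf is a sketch like the one below. The 64MB figure is an arbitrary example, not a value recommended anywhere in the thread.

SET work_mem = '64MB';   -- affects only this session; example value
-- re-run the slow query under EXPLAIN ANALYZE in this same session and
-- see whether the Sort step gets cheaper
RESET work_mem;

Since the setting is scoped to one session, it sidesteps the max_connections * work_mem concern for the rest of the server.]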
Also, work_mem is not going to be fully allocated at\nfork time, it'll only use up to that much as needed.\n\n--Scott\n\n\n>\n> Here are the docs on work mem\n>\n>\n> http://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html#\n> RUNTIME-CONFIG-RESOURCE-MEMORY\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Fri, May 29, 2009 at 1:30 PM, Dave Dutcher <[email protected]> wrote:\n> From: Anne Rosset\n> Subject: Re: [PERFORM] Unexpected query plan results\n> >\n> >\n> Thank Dave. We are using postgresql-server-8.2.4-1PGDG and\n> have work-mem set to 20MB.\n> What value would you advise?\n> thanks,\n>\n> Anne\n\n\nWork-mem is kind of tricky because the right setting depends on how much ram\nyour machine has, is the machine dedicated to postgres, and how many\nsimultaneous connections you have.  If this is a test server, and not used\nin production, you could just play around with the setting and see if your\nquery gets any faster.  Right, the trick to remember is that you could possibly end up in a scenario where you have max_connections * work_mem being used just for sorting / joins and the rest of your memory will be swapped, so be careful not to push too high.  Also, work_mem is not going to be fully allocated at fork time, it'll only use up to that much as needed.\n--Scott\n\nHere are the docs on work mem\n\nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html#\nRUNTIME-CONFIG-RESOURCE-MEMORY\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 29 May 2009 15:08:43 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "Dave Dutcher wrote:\n\n>>From: Anne Rosset\n>>Subject: Re: [PERFORM] Unexpected query plan results\n>> \n>>\n>>> \n>>>\n>>> \n>>>\n>>Thank Dave. We are using postgresql-server-8.2.4-1PGDG and \n>>have work-mem set to 20MB.\n>>What value would you advise?\n>>thanks,\n>>\n>>Anne\n>> \n>>\n>\n>\n>Work-mem is kind of tricky because the right setting depends on how much ram\n>your machine has, is the machine dedicated to postgres, and how many\n>simultaneous connections you have. If this is a test server, and not used\n>in production, you could just play around with the setting and see if your\n>query gets any faster. 
\n>\n>Here are the docs on work mem\n>\n>http://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html#\n>RUNTIME-CONFIG-RESOURCE-MEMORY\n>\n> \n>\nThanks Dave.\nThe result with enable_sort=false is much better (at least the left join \nis not having better result): Now I am getting a 4s runtime.\n( I also got the same performance by setting enable_mergejoin to false).\n\nDo you see anything I could do to make it faster?\nWhen the query plan takes a wrong path, is it possible that it is \nbecause statistics have not been run or updated?\n\nThanks\nAnne\n\n\n \nQUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------\n Hash Join (cost=9276.24..100313.55 rows=1 width=155) (actual \ntime=168.148..4144.595 rows=71 loops=1)\n Hash Cond: ((folder.project_id)::text = (project.id)::text)\n -> Nested Loop (cost=9271.96..100302.44 rows=1819 width=167) \n(actual time=168.080..4144.363 rows=71 loops=1)\n -> Index Scan using folder_pk on folder (cost=0.00..4.35 \nrows=1 width=26) (actual time=0.029..0.032 rows=1 loops=1)\n Index Cond: ('tracker3641'::text = (id)::text)\n Filter: ((path)::text = 'tracker.perf_test'::text)\n -> Nested Loop (cost=9271.96..100279.90 rows=1819 width=168) \n(actual time=168.045..4144.249 rows=71 loops=1)\n -> Nested Loop (cost=9271.96..99724.69 rows=1819 \nwidth=150) (actual time=168.028..4143.126 rows=71 loops=1)\n -> Nested Loop (cost=9271.96..99198.39 rows=1819 \nwidth=132) (actual time=168.008..4141.973 rows=71 loops=1)\n -> Nested Loop (cost=9271.96..98543.72 \nrows=1819 width=131) (actual time=167.989..4140.718 rows=71 loops=1)\n -> Nested Loop \n(cost=9271.96..97889.05 rows=1819 width=144) (actual \ntime=167.971..4139.482 rows=71 loops=1)\n -> Nested Loop \n(cost=9271.96..97234.38 rows=1819 width=157) (actual \ntime=167.943..4137.998 rows=71 loops=1)\n -> Nested Loop \n(cost=9271.96..92431.80 rows=2796 width=158) (actual \ntime=167.893..4136.297 rows=71 loops=1)\n -> Hash Join \n(cost=9271.96..42281.07 rows=66876 width=70) (actual \ntime=125.019..782.122 rows=184378 loops=1)\n Hash Cond: \n((artifact.status_fv)::text = (field_value2.id)::text)\n -> Seq Scan on \nartifact (cost=0.00..25206.14 rows=475614 width=69) (actual \ntime=0.006..211.907 rows=468173 loops=1\n)\n -> Hash \n(cost=8285.92..8285.92 rows=78883 width=27) (actual \ntime=124.929..124.929 rows=79488 loops=1)\n -> Index \nScan using field_class_idx on field_value field_value2 \n(cost=0.00..8285.92 rows=78883 width=27) (ac\ntual time=0.040..60.861 rows=79488 loops=1)\n \nIndex Cond: ((value_class)::text = 'Open'::text)\n -> Index Scan using \nitem_pk on item (cost=0.00..0.74 rows=1 width=88) (actual \ntime=0.018..0.018 rows=0 loops=184378)\n Index Cond: \n((artifact.id)::text = (item.id)::text)\n Filter: ((NOT \nis_deleted) AND ((folder_id)::text = 'tracker3641'::text))\n -> Index Scan using \nrelation_target on relationship (cost=0.00..1.71 rows=1 width=25) \n(actual time=0.021..0.022 rows=1 loops=7\n1)\n Index Cond: \n((artifact.id)::text = (relationship.target_id)::text)\n Filter: ((NOT \nis_deleted) AND ((relationship_type_name)::text = \n'ArtifactAssignment'::text))\n -> Index Scan using \nfield_value_pk on field_value field_value4 (cost=0.00..0.35 rows=1 \nwidth=13) (actual time=0.018..0.019 rows=1 lo\nops=71)\n Index Cond: \n((artifact.customer_fv)::text = (field_value4.id)::text)\n -> Index Scan using 
field_value_pk on \nfield_value (cost=0.00..0.35 rows=1 width=13) (actual time=0.015..0.015 \nrows=1 loops=71)\n Index Cond: \n((artifact.group_fv)::text = (field_value.id)::text)\n -> Index Scan using field_value_pk on \nfield_value field_value3 (cost=0.00..0.35 rows=1 width=27) (actual \ntime=0.015..0.015 rows=1 loops=71)\n Index Cond: \n((artifact.category_fv)::text = (field_value3.id)::text)\n -> Index Scan using sfuser_pk on sfuser sfuser2 \n(cost=0.00..0.28 rows=1 width=42) (actual time=0.013..0.014 rows=1 loops=71)\n Index Cond: ((relationship.origin_id)::text = \n(sfuser2.id)::text)\n -> Index Scan using sfuser_pk on sfuser \n(cost=0.00..0.29 rows=1 width=42) (actual time=0.013..0.014 rows=1 loops=71)\n Index Cond: ((item.created_by_id)::text = \n(sfuser.id)::text)\n -> Hash (cost=4.27..4.27 rows=1 width=12) (actual time=0.047..0.047 \nrows=1 loops=1)\n -> Index Scan using project_path on project (cost=0.00..4.27 \nrows=1 width=12) (actual time=0.041..0.043 rows=1 loops=1)\n Index Cond: ((path)::text = \n'projects.gl_coconet_performance_improveme'::text)\n Total runtime: 4146.198 ms\n(39 rows)\n\n\n\n\n\n", "msg_date": "Fri, 29 May 2009 14:31:57 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "On Thu, May 28, 2009 at 6:46 PM, Anne Rosset <[email protected]> wrote:\n>                                                  ->  Index Scan using\n> item_pk on item  (cost=0.00..176865.31 rows=97498 width=88) (actual\n> time=117.304..2405.060 rows=71 loops=1)\n>                                                        Filter: ((NOT\n> is_deleted) AND ((folder_id)::text = 'tracker3641'::text))\n\nThe fact that the estimated row count differs from the actual row\ncount by a factor of more than 1000 is likely the root cause of your\nproblem here. You probably want to figure out why that's happening.\nHow many rows are in that table and what value are you using for\ndefault_statistics_target?\n\n...Robert\n", "msg_date": "Fri, 29 May 2009 17:32:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "> When the query plan takes a wrong path, is it possible that it is because\n> statistics have not been run or updated?\n\nYes. If you are not using autovacuum, you need to ANALYZE regularly,\nor bad things will happen to you.\n\n...Robert\n", "msg_date": "Fri, 29 May 2009 17:34:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "Robert Haas wrote:\n\n>On Thu, May 28, 2009 at 6:46 PM, Anne Rosset <[email protected]> wrote:\n> \n>\n>> -> Index Scan using\n>>item_pk on item (cost=0.00..176865.31 rows=97498 width=88) (actual\n>>time=117.304..2405.060 rows=71 loops=1)\n>> Filter: ((NOT\n>>is_deleted) AND ((folder_id)::text = 'tracker3641'::text))\n>> \n>>\n>\n>The fact that the estimated row count differs from the actual row\n>count by a factor of more than 1000 is likely the root cause of your\n>problem here. 
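[Aside, for illustration only, not part of the original message: one way to look at that misestimate in isolation is to run just the filter from the plan above, using the table and column names already shown in the thread.

EXPLAIN ANALYZE
SELECT count(*)
FROM item
WHERE folder_id = 'tracker3641'
  AND NOT is_deleted;
-- compare the planner's "rows=" estimate on the item scan with the
-- "actual ... rows=" figure printed next to it]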
You probably want to figure out why that's happening.\n>How many rows are in that table and what value are you using for\n>default_statistics_target?\n>\n>...Robert\n> \n>\n\nThe table has 468173 rows and the value for default_statistics_target is \n750.\nAnne\n", "msg_date": "Fri, 29 May 2009 14:57:30 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "On Fri, May 29, 2009 at 5:57 PM, Anne Rosset <[email protected]> wrote:\n> Robert Haas wrote:\n>\n>> On Thu, May 28, 2009 at 6:46 PM, Anne Rosset <[email protected]> wrote:\n>>\n>>>\n>>>                                                ->  Index Scan using\n>>> item_pk on item  (cost=0.00..176865.31 rows=97498 width=88) (actual\n>>> time=117.304..2405.060 rows=71 loops=1)\n>>>                                                      Filter: ((NOT\n>>> is_deleted) AND ((folder_id)::text = 'tracker3641'::text))\n>>>\n>>\n>> The fact that the estimated row count differs from the actual row\n>> count by a factor of more than 1000 is likely the root cause of your\n>> problem here.  You probably want to figure out why that's happening.\n>> How many rows are in that table and what value are you using for\n>> default_statistics_target?\n>>\n>> ...Robert\n>>\n>\n> The table has 468173 rows and the value for default_statistics_target is\n> 750.\n> Anne\n\nOK, that sounds good. If you haven't run ANALYZE or VACUUM ANALYZE\nrecently, you should do that first and see if it fixes anything.\nOtherwise, maybe there's a hidden correlation between the deleted\ncolumn and the folder_id column. We can assess that like this:\n\nSELECT SUM(1) FROM item WHERE NOT deleted;\nSELECT SUM(1) FROM item WHERE folder_id = 'tracker3641';\nSELECT SUM(1) FROM item WHERE folder_id = 'tracker3641' AND NOT deleted;\n\nCan you try that and send along the results?\n\nThanks,\n\n...Robert\n", "msg_date": "Fri, 29 May 2009 18:15:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "Robert Haas wrote:\n\n>On Fri, May 29, 2009 at 5:57 PM, Anne Rosset <[email protected]> wrote:\n> \n>\n>>Robert Haas wrote:\n>>\n>> \n>>\n>>>On Thu, May 28, 2009 at 6:46 PM, Anne Rosset <[email protected]> wrote:\n>>>\n>>> \n>>>\n>>>> -> Index Scan using\n>>>>item_pk on item (cost=0.00..176865.31 rows=97498 width=88) (actual\n>>>>time=117.304..2405.060 rows=71 loops=1)\n>>>> Filter: ((NOT\n>>>>is_deleted) AND ((folder_id)::text = 'tracker3641'::text))\n>>>>\n>>>> \n>>>>\n>>>The fact that the estimated row count differs from the actual row\n>>>count by a factor of more than 1000 is likely the root cause of your\n>>>problem here. You probably want to figure out why that's happening.\n>>>How many rows are in that table and what value are you using for\n>>>default_statistics_target?\n>>>\n>>>...Robert\n>>>\n>>> \n>>>\n>>The table has 468173 rows and the value for default_statistics_target is\n>>750.\n>>Anne\n>> \n>>\n>\n>OK, that sounds good. If you haven't run ANALYZE or VACUUM ANALYZE\n>recently, you should do that first and see if it fixes anything.\n>Otherwise, maybe there's a hidden correlation between the deleted\n>column and the folder_id column. 
We can assess that like this:\n>\n>SELECT SUM(1) FROM item WHERE NOT deleted;\n>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641';\n>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641' AND NOT deleted;\n>\n>Can you try that and send along the results?\n>\n>Thanks,\n>\n>...Robert\n> \n>\nHi Robert,\nwe did a vacuum analyze and the results are the same.\nHere are the results of the queries :\n\nSELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum --------- 1824592 (1 \nrow)\nSELECT SUM(1) FROM item WHERE folder_id = 'tracker3641 \n</sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum -------- \n122412 (1 row)\nSELECT SUM(1) FROM item WHERE folder_id = 'tracker3641 \n</sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND \nis_deleted = 'f'; sum ----- 71 (1 row)\nSELECT SUM(1) FROM item WHERE folder_id = 'tracker3641 \n</sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND \nis_deleted = 't'; sum -------- 122341 (1 row)\n\nThanks for your help,\nAnne\n", "msg_date": "Mon, 01 Jun 2009 11:14:47 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset <[email protected]> wrote:\n>>> The table has 468173 rows and the value for default_statistics_target is\n>>> 750.\n>>> Anne\n> Hi Robert,\n> we did a vacuum analyze and the results are the same.\n> Here are the results of the queries :\n>\n> SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum --------- 1824592 (1\n> row)\n> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum --------\n> 122412 (1 row)\n> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted =\n> 'f'; sum ----- 71 (1 row)\n> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted =\n> 't'; sum -------- 122341 (1 row)\n\nSomething's not right here. If the whole table has only 468173 rows,\nyou can't have 1.8 million deleted rows where is_deleted = false.\n\n...Robert\n", "msg_date": "Mon, 1 Jun 2009 16:24:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "Robert Haas wrote:\n\n>On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset <[email protected]> wrote:\n> \n>\n>>>>The table has 468173 rows and the value for default_statistics_target is\n>>>>750.\n>>>>Anne\n>>>> \n>>>>\n>>Hi Robert,\n>>we did a vacuum analyze and the results are the same.\n>>Here are the results of the queries :\n>>\n>>SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum --------- 1824592 (1\n>>row)\n>>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum --------\n>>122412 (1 row)\n>>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted =\n>>'f'; sum ----- 71 (1 row)\n>>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted =\n>>'t'; sum -------- 122341 (1 row)\n>> \n>>\n>\n>Something's not right here. 
If the whole table has only 468173 rows,\n>you can't have 1.8 million deleted rows where is_deleted = false.\n>\n>...Robert\n> \n>\nThe item table has 2324829 rows\nThe artifact table has 468173 rows.\nThanks,\nAnne\n", "msg_date": "Mon, 01 Jun 2009 13:53:24 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "> -----Original Message-----\n> From: Anne Rosset\n> Subject: Re: [PERFORM] Unexpected query plan results\n> \n> >>\n> >>SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum \n> --------- 1824592 \n> >>(1\n> >>row)\n> >>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641 \n> >></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum \n> >>--------\n> >>122412 (1 row)\n> >>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641 \n> >></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND \n> >>is_deleted = 'f'; sum ----- 71 (1 row) SELECT SUM(1) FROM \n> item WHERE \n> >>folder_id = 'tracker3641 \n> >></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND \n> >>is_deleted = 't'; sum -------- 122341 (1 row)\n> >> \n> >>\n> >\n> >Something's not right here. If the whole table has only \n> 468173 rows, \n> >you can't have 1.8 million deleted rows where is_deleted = false.\n> >\n> >...Robert\n> > \n> >\n> The item table has 2324829 rows\n> The artifact table has 468173 rows.\n> Thanks,\n> Anne\n\nI'd been thinking about the sort, but I hadn't thought yet if that index\nscan on item could be made faster. Could you post the table definition of\nitem including the indexes on it?\n\nDave\n\n\n \n\n", "msg_date": "Mon, 1 Jun 2009 16:07:01 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "Dave Dutcher wrote:\n\n>>-----Original Message-----\n>>From: Anne Rosset\n>>Subject: Re: [PERFORM] Unexpected query plan results\n>>\n>> \n>>\n>>>>SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum \n>>>> \n>>>>\n>>--------- 1824592 \n>> \n>>\n>>>>(1\n>>>>row)\n>>>>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641 \n>>>></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum \n>>>>--------\n>>>>122412 (1 row)\n>>>>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641 \n>>>></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND \n>>>>is_deleted = 'f'; sum ----- 71 (1 row) SELECT SUM(1) FROM \n>>>> \n>>>>\n>>item WHERE \n>> \n>>\n>>>>folder_id = 'tracker3641 \n>>>></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND \n>>>>is_deleted = 't'; sum -------- 122341 (1 row)\n>>>> \n>>>>\n>>>> \n>>>>\n>>>Something's not right here. If the whole table has only \n>>> \n>>>\n>>468173 rows, \n>> \n>>\n>>>you can't have 1.8 million deleted rows where is_deleted = false.\n>>>\n>>>...Robert\n>>> \n>>>\n>>> \n>>>\n>>The item table has 2324829 rows\n>>The artifact table has 468173 rows.\n>>Thanks,\n>>Anne\n>> \n>>\n>\n>I'd been thinking about the sort, but I hadn't thought yet if that index\n>scan on item could be made faster. 
Could you post the table definition of\n>item including the indexes on it?\n>\n>Dave\n>\n>\n> \n>\n> \n>\nDave:\n Table \"public.item\"\n Column | Type | Modifiers\n---------------------+--------------------------+-----------\n id | character varying(32) | not null\n name | character varying(128) |\n title | character varying(255) |\n version | integer | not null\n date_created | timestamp with time zone | not null\n date_last_modified | timestamp with time zone | not null\n is_deleted | boolean | not null\n type_id | character varying(32) |\n folder_id | character varying(32) |\n planning_folder_id | character varying(32) |\n created_by_id | character varying(32) |\n last_modified_by_id | character varying(32) |\nIndexes:\n \"item_pk\" primary key, btree (id)\n \"item_created_by_id\" btree (created_by_id)\n \"item_date_created\" btree (date_created)\n \"item_folder\" btree (folder_id)\n \"item_name\" btree (name)\n \"item_planning_folder\" btree (planning_folder_id)\n\nThanks,\nAnne\n", "msg_date": "Mon, 01 Jun 2009 14:10:25 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "On Mon, Jun 1, 2009 at 4:53 PM, Anne Rosset <[email protected]> wrote:\n>> On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset <[email protected]> wrote:\n>>> SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum --------- 1824592 (1\n>>> row)\n>>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum --------\n>>> 122412 (1 row)\n>>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted\n>>> =\n>>> 'f'; sum ----- 71 (1 row)\n>>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted\n>>> =\n>>> 't'; sum -------- 122341 (1 row)\n>\n> The item table has 2324829 rows\n\nSo 1824592/2324829 = 78.4% of the rows have is_deleted = false, and\n0.06709% of the rows have the relevant folder_id. Therefore the\nplanner assumes that there will be 2324829 * 78.4% * 0.06709% =~\n96,000 rows that satisfy both criteria (the original explain had\n97,000; there's some variability due to the fact that the analyze only\nsamples a random subset of pages), but the real number is 71, leading\nit to make a very bad decision. This is a classic \"hidden\ncorrelation\" problem, where two columns are correlated but the planner\ndoesn't notice, and you get a terrible plan.\n\nUnfortunately, I'm not aware of any real good solution to this\nproblem. The two obvious approaches are multi-column statistics and\nplanner hints; PostgreSQL supports neither. There are various\npossible hacks that aren't very satisfying, such as:\n\n1. Redesign the application to put the deleted records in a separate\ntable from the non-deleted records. But if the deleted records still\nhave child records in other tables, this won't fly due to foreign key\nproblems.\n\n2. Inserting a clause that the optimizer doesn't understand to fool it\ninto thinking that the scan on the item table is much more selective\nthan is exactly the case. I think adding (item.id + 0) = (item.id +\n0) to the WHERE clause will work; the planner will brilliantly\nestimate the selectivity of that expression as one in 200. 
The\nproblem with this is that it will likely lead to a better plan in this\nparticular case, but for other folder_ids it may make things worse.\nThere's also no guarantee that a future version of PostgreSQL won't be\nsmart enough to see through this type of sophistry, though I think\nyou're safe as far as the forthcoming 8.4 release is concerned.\n\n3. A hack that makes me gag, but it actually seems to work...\n\nCREATE OR REPLACE FUNCTION item_squash(varchar, boolean) RETURNS varchar[] AS $$\nSELECT array[$1, CASE WHEN $2 THEN 'true' ELSE 'false' END]\n$$ LANGUAGE sql IMMUTABLE;\n\nCREATE INDEX item_squash_idx ON item (item_squash(folder_id, is_deleted));\n\n...and then remove \"folder_id = XXX AND is_deleted = YYY\" from your\nquery and substitute \"item_squash(folder_id, is_deleted) =\nitem_squash(XXX, YYY)\". The expresson index forces the planner to\ngather statistics on the distribution of values for that expression,\nand if you then write a query using that exact same expression the\nplanner can take advantage of it.\n\n...Robert\n", "msg_date": "Mon, 1 Jun 2009 18:58:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "2009/6/2 Robert Haas <[email protected]>\n\n> On Mon, Jun 1, 2009 at 4:53 PM, Anne Rosset <[email protected]> wrote:\n> >> On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset <[email protected]> wrote:\n> >>> SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum --------- 1824592\n> (1\n> >>> row)\n> >>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n> >>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum\n> --------\n> >>> 122412 (1 row)\n> >>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n> >>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND\n> is_deleted\n> >>> =\n> >>> 'f'; sum ----- 71 (1 row)\n> >>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n> >>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND\n> is_deleted\n> >>> =\n> >>> 't'; sum -------- 122341 (1 row)\n> >\n> > The item table has 2324829 rows\n>\n> So 1824592/2324829 = 78.4% of the rows have is_deleted = false, and\n> 0.06709% of the rows have the relevant folder_id. Therefore the\n> planner assumes that there will be 2324829 * 78.4% * 0.06709% =~\n> 96,000 rows that satisfy both criteria (the original explain had\n> 97,000; there's some variability due to the fact that the analyze only\n> samples a random subset of pages), but the real number is 71, leading\n> it to make a very bad decision. This is a classic \"hidden\n> correlation\" problem, where two columns are correlated but the planner\n> doesn't notice, and you get a terrible plan.\n>\n> Unfortunately, I'm not aware of any real good solution to this\n> problem. The two obvious approaches are multi-column statistics and\n> planner hints; PostgreSQL supports neither.\n>\n\nHow about partial index (create index idx on item(folder_id) where not\nis_deleted)? 
Won't it have required statistics (even if it is not used in\nplan)?\n\n2009/6/2 Robert Haas <[email protected]>\nOn Mon, Jun 1, 2009 at 4:53 PM, Anne Rosset <[email protected]> wrote:\n>> On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset <[email protected]> wrote:\n>>> SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum --------- 1824592 (1\n>>> row)\n>>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum --------\n>>> 122412 (1 row)\n>>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted\n>>> =\n>>> 'f'; sum ----- 71 (1 row)\n>>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted\n>>> =\n>>> 't'; sum -------- 122341 (1 row)\n>\n> The item table has 2324829 rows\n\nSo 1824592/2324829 = 78.4% of the rows have is_deleted = false, and\n0.06709% of the rows have the relevant folder_id.  Therefore the\nplanner assumes that there will be 2324829 * 78.4% * 0.06709% =~\n96,000 rows that satisfy both criteria (the original explain had\n97,000; there's some variability due to the fact that the analyze only\nsamples a random subset of pages), but the real number is 71, leading\nit to make a very bad decision.  This is a classic \"hidden\ncorrelation\" problem, where two columns are correlated but the planner\ndoesn't notice, and you get a terrible plan.\n\nUnfortunately, I'm not aware of any real good solution to this\nproblem.  The two obvious approaches are multi-column statistics and\nplanner hints; PostgreSQL supports neither.  How about partial index (create index idx on item(folder_id) where not is_deleted)? Won't it have required statistics (even if it is not used in plan)?", "msg_date": "Tue, 2 Jun 2009 13:20:17 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "On Jun 2, 2009, at 6:20 AM, Віталій Тимчишин \n<[email protected]> wrote:\n\n>\n>\n> 2009/6/2 Robert Haas <[email protected]>\n> On Mon, Jun 1, 2009 at 4:53 PM, Anne Rosset <[email protected]> \n> wrote:\n> >> On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset <[email protected]> \n> wrote:\n> >>> SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum --------- 1824592 (1\n> >>> row)\n> >>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n> >>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum \n> --------\n> >>> 122412 (1 row)\n> >>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n> >>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND \n> is_deleted\n> >>> =\n> >>> 'f'; sum ----- 71 (1 row)\n> >>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n> >>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND \n> is_deleted\n> >>> =\n> >>> 't'; sum -------- 122341 (1 row)\n> >\n> > The item table has 2324829 rows\n>\n> So 1824592/2324829 = 78.4% of the rows have is_deleted = false, and\n> 0.06709% of the rows have the relevant folder_id. Therefore the\n> planner assumes that there will be 2324829 * 78.4% * 0.06709% =~\n> 96,000 rows that satisfy both criteria (the original explain had\n> 97,000; there's some variability due to the fact that the analyze only\n> samples a random subset of pages), but the real number is 71, leading\n> it to make a very bad decision. 
This is a classic \"hidden\n> correlation\" problem, where two columns are correlated but the planner\n> doesn't notice, and you get a terrible plan.\n>\n> Unfortunately, I'm not aware of any real good solution to this\n> problem. The two obvious approaches are multi-column statistics and\n> planner hints; PostgreSQL supports neither.\n>\n> How about partial index (create index idx on item(folder_id) where \n> not is_deleted)? Won't it have required statistics (even if it is \n> not used in plan)?\n\nI tried that; doesn't seem to work.\n\n...Robert\nOn Jun 2, 2009, at 6:20 AM, Віталій Тимчишин <[email protected]> wrote:2009/6/2 Robert Haas <[email protected]>\nOn Mon, Jun 1, 2009 at 4:53 PM, Anne Rosset <[email protected]> wrote:\n>> On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset <[email protected]> wrote:\n>>> SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum --------- 1824592 (1\n>>> row)\n>>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum --------\n>>> 122412 (1 row)\n>>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted\n>>> =\n>>> 'f'; sum ----- 71 (1 row)\n>>> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>> </sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted\n>>> =\n>>> 't'; sum -------- 122341 (1 row)\n>\n> The item table has 2324829 rows\n\nSo 1824592/2324829 = 78.4% of the rows have is_deleted = false, and\n0.06709% of the rows have the relevant folder_id.  Therefore the\nplanner assumes that there will be 2324829 * 78.4% * 0.06709% =~\n96,000 rows that satisfy both criteria (the original explain had\n97,000; there's some variability due to the fact that the analyze only\nsamples a random subset of pages), but the real number is 71, leading\nit to make a very bad decision.  This is a classic \"hidden\ncorrelation\" problem, where two columns are correlated but the planner\ndoesn't notice, and you get a terrible plan.\n\nUnfortunately, I'm not aware of any real good solution to this\nproblem.  The two obvious approaches are multi-column statistics and\nplanner hints; PostgreSQL supports neither.  How about partial index (create index idx on item(folder_id) where not is_deleted)? 
Won't it have required statistics (even if it is not used in plan)?\nI tried that; doesn't seem to work....Robert", "msg_date": "Tue, 2 Jun 2009 07:48:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "Robert Haas wrote:\n\n>On Mon, Jun 1, 2009 at 4:53 PM, Anne Rosset <[email protected]> wrote:\n> \n>\n>>>On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset <[email protected]> wrote:\n>>> \n>>>\n>>>>SELECT SUM(1) FROM item WHERE is_deleted = 'f'; sum --------- 1824592 (1\n>>>>row)\n>>>>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>>></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>'; sum --------\n>>>>122412 (1 row)\n>>>>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>>></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted\n>>>>=\n>>>>'f'; sum ----- 71 (1 row)\n>>>>SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641\n>>>></sf/sfmain/do/go/tracker3641?returnUrlKey=1243878161701>' AND is_deleted\n>>>>=\n>>>>'t'; sum -------- 122341 (1 row)\n>>>> \n>>>>\n>>The item table has 2324829 rows\n>> \n>>\n>\n>So 1824592/2324829 = 78.4% of the rows have is_deleted = false, and\n>0.06709% of the rows have the relevant folder_id. Therefore the\n>planner assumes that there will be 2324829 * 78.4% * 0.06709% =~\n>96,000 rows that satisfy both criteria (the original explain had\n>97,000; there's some variability due to the fact that the analyze only\n>samples a random subset of pages), but the real number is 71, leading\n>it to make a very bad decision. This is a classic \"hidden\n>correlation\" problem, where two columns are correlated but the planner\n>doesn't notice, and you get a terrible plan.\n>\n>Unfortunately, I'm not aware of any real good solution to this\n>problem. The two obvious approaches are multi-column statistics and\n>planner hints; PostgreSQL supports neither. There are various\n>possible hacks that aren't very satisfying, such as:\n>\n>1. Redesign the application to put the deleted records in a separate\n>table from the non-deleted records. But if the deleted records still\n>have child records in other tables, this won't fly due to foreign key\n>problems.\n>\n>2. Inserting a clause that the optimizer doesn't understand to fool it\n>into thinking that the scan on the item table is much more selective\n>than is exactly the case. I think adding (item.id + 0) = (item.id +\n>0) to the WHERE clause will work; the planner will brilliantly\n>estimate the selectivity of that expression as one in 200. The\n>problem with this is that it will likely lead to a better plan in this\n>particular case, but for other folder_ids it may make things worse.\n>There's also no guarantee that a future version of PostgreSQL won't be\n>smart enough to see through this type of sophistry, though I think\n>you're safe as far as the forthcoming 8.4 release is concerned.\n>\n>3. A hack that makes me gag, but it actually seems to work...\n>\n>CREATE OR REPLACE FUNCTION item_squash(varchar, boolean) RETURNS varchar[] AS $$\n>SELECT array[$1, CASE WHEN $2 THEN 'true' ELSE 'false' END]\n>$$ LANGUAGE sql IMMUTABLE;\n>\n>CREATE INDEX item_squash_idx ON item (item_squash(folder_id, is_deleted));\n>\n>...and then remove \"folder_id = XXX AND is_deleted = YYY\" from your\n>query and substitute \"item_squash(folder_id, is_deleted) =\n>item_squash(XXX, YYY)\". 
The expresson index forces the planner to\n>gather statistics on the distribution of values for that expression,\n>and if you then write a query using that exact same expression the\n>planner can take advantage of it.\n>\n>...Robert\n> \n>\nThanks a lot Robert. Not sure how we will tackle this but at least now \nwe have an explanation. From what I read, results won't improved in 8.4. \nIs that correct?\n\nThanks,\nAnne\n", "msg_date": "Tue, 02 Jun 2009 08:16:09 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected query plan results" }, { "msg_contents": "On Tue, Jun 2, 2009 at 11:16 AM, Anne Rosset <[email protected]> wrote:\n> Thanks a lot Robert. Not sure how we will tackle this but at least now we\n> have an explanation. From what I read, results won't improved in 8.4. Is\n> that correct?\n\nYes, that's correct.\n\nGood luck...\n\n...Robert\n", "msg_date": "Tue, 2 Jun 2009 11:41:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected query plan results" } ]
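A hedged recap of the workaround discussed in the thread above, written as a sketch rather than as output captured from any of the systems involved. The table, column and function names are the ones used in the thread; the literal folder id is just the example value that appeared there.

-- 1. Quantify the correlation the planner cannot see: under its
--    independence assumption, selectivity(folder_id) times
--    selectivity(NOT is_deleted) should predict the combined count,
--    but the real combined count is far smaller.
SELECT
    (SELECT count(*) FROM item WHERE folder_id = 'tracker3641')  AS folder_rows,
    (SELECT count(*) FROM item WHERE NOT is_deleted)             AS live_rows,
    (SELECT count(*) FROM item)                                  AS total_rows,
    (SELECT count(*) FROM item
      WHERE folder_id = 'tracker3641' AND NOT is_deleted)        AS combined_rows;

-- 2. After creating item_squash() and item_squash_idx as suggested,
--    gather statistics on the indexed expression and check whether the
--    estimate improves for the rewritten predicate:
ANALYZE item;
EXPLAIN
SELECT count(*)
FROM item
WHERE item_squash(folder_id, is_deleted) = item_squash('tracker3641', false);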
[ { "msg_contents": "autovacuum has been running on 2 tables for > 5 hours. There tables are \nnot huge (see below). For the past ~1 hour, I've shut off all other \nactivity on this database. The other table being vacuumed has more rows \n(1897810). Anyone have any ideas about why this is taking so long?\n\nThanks,\nBrian\n\n\n[root@rdl64xeoserv01 log]# fgrep autov /var/lib/pgsql/data/postgresql.conf\nautovacuum = on # enable autovacuum subprocess?\nautovacuum_naptime = 60s # time between autovacuum runs, \nin secs\nautovacuum_vacuum_threshold = 200 # min # of tuple updates before\nautovacuum_analyze_threshold = 50 # min # of tuple updates before\nautovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before\nautovacuum_analyze_scale_factor = 0.1 # fraction of rel size before\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n # autovac, -1 means use\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n\n\n\nWelcome to psql 8.3.5, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help with psql commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ncemdb=# select procpid,query_start,current_query from pg_stat_activity;\n procpid | query_start | \ncurrent_query\n---------+-------------------------------+-----------------------------------------------------------------\n 24866 | 2009-05-29 13:50:11.251397-07 | autovacuum: VACUUM \npublic.ts_user_sessions_map\n 24869 | 2009-05-29 11:46:54.221713-07 | autovacuum: VACUUM ANALYZE \npublic.ts_stats_transet_user_daily\n 24872 | 2009-05-29 11:31:28.324954-07 | autovacuum: VACUUM ANALYZE \npublic.ts_stats_transet_user_weekly\n 28097 | 2009-05-29 15:58:49.24832-07 | select \nprocpid,query_start,current_query from pg_stat_activity;\n(4 rows)\n\ncemdb=# select count(*) from ts_stats_transet_user_daily;\n count\n--------\n 558321\n(1 row)\n\ncemdb=# select count(*) from ts_stats_transet_user_weekly;\n count\n--------\n 333324\n(1 row)\n\ncemdb=# select c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c \njoin pg_locks l on c.oid=l.relation order by l.pid;\n oid | relname | \npid | mode | granted\n----------+-------------------------------------------------------+-------+--------------------------+---------\n 26612062 | ts_user_sessions_map | \n24866 | ShareUpdateExclusiveLock | t\n 26613644 | ts_user_sessions_map_interimsessionidindex | \n24866 | RowExclusiveLock | t\n 26613645 | ts_user_sessions_map_sessionidindex | \n24866 | RowExclusiveLock | t\n 26612846 | ts_user_sessions_map_appindex | \n24866 | RowExclusiveLock | t\n 26612417 | ts_user_sessions_map_pkey | \n24866 | RowExclusiveLock | t\n 27208308 | ts_stats_transet_user_daily_userindex | \n24869 | RowExclusiveLock | t\n 27208305 | ts_stats_transet_user_daily_transetincarnationidindex | \n24869 | RowExclusiveLock | t\n 27208310 | ts_stats_transet_user_daily_yearindex | \n24869 | RowExclusiveLock | t\n 27208307 | ts_stats_transet_user_daily_userincarnationidindex | \n24869 | RowExclusiveLock | t\n 27208302 | ts_stats_transet_user_daily_lastaggregatedrowindex | \n24869 | RowExclusiveLock | t\n 27208309 | ts_stats_transet_user_daily_weekindex | \n24869 | RowExclusiveLock | t\n 26612320 | ts_stats_transet_user_daily_pkey | \n24869 | RowExclusiveLock | t\n 27208306 | ts_stats_transet_user_daily_transetindex | \n24869 | RowExclusiveLock | t\n 26611722 | ts_stats_transet_user_daily | \n24869 | ShareUpdateExclusiveLock | t\n 27208303 | 
ts_stats_transet_user_daily_monthindex | \n24869 | RowExclusiveLock | t\n 27208304 | ts_stats_transet_user_daily_starttimeindex | \n24869 | RowExclusiveLock | t\n 27208300 | ts_stats_transet_user_daily_dayindex | \n24869 | RowExclusiveLock | t\n 27208301 | ts_stats_transet_user_daily_hourindex | \n24869 | RowExclusiveLock | t\n 26612551 | ts_stats_transet_user_weekly_lastaggregatedrowindex | \n24872 | RowExclusiveLock | t\n 26612558 | ts_stats_transet_user_weekly_yearindex | \n24872 | RowExclusiveLock | t\n 26612326 | ts_stats_transet_user_weekly_pkey | \n24872 | RowExclusiveLock | t\n 26612554 | ts_stats_transet_user_weekly_transetindex | \n24872 | RowExclusiveLock | t\n 26612555 | ts_stats_transet_user_weekly_userincarnationidindex | \n24872 | RowExclusiveLock | t\n 26611743 | ts_stats_transet_user_weekly | \n24872 | ShareUpdateExclusiveLock | t\n 26612556 | ts_stats_transet_user_weekly_userindex | \n24872 | RowExclusiveLock | t\n 26612553 | ts_stats_transet_user_weekly_starttimeindex | \n24872 | RowExclusiveLock | t\n 26612557 | ts_stats_transet_user_weekly_weekindex | \n24872 | RowExclusiveLock | t\n 26612550 | ts_stats_transet_user_weekly_hourindex | \n24872 | RowExclusiveLock | t\n 26612552 | ts_stats_transet_user_weekly_monthindex | \n24872 | RowExclusiveLock | t\n 26612549 | ts_stats_transet_user_weekly_dayindex | \n24872 | RowExclusiveLock | t\n 2663 | pg_class_relname_nsp_index | \n28097 | AccessShareLock | t\n 10969 | pg_locks | \n28097 | AccessShareLock | t\n 1259 | pg_class | \n28097 | AccessShareLock | t\n 2662 | pg_class_oid_index | \n28097 | AccessShareLock | t\n(34 rows)\n\n", "msg_date": "Fri, 29 May 2009 16:43:28 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum hung?" }, { "msg_contents": "Brian Cox wrote:\n> autovacuum has been running on 2 tables for > 5 hours. There tables are \n> not huge (see below). For the past ~1 hour, I've shut off all other \n> activity on this database. The other table being vacuumed has more rows \n> (1897810). Anyone have any ideas about why this is taking so long?\n\nWhat's vacuum_cost_delay?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 29 May 2009 20:02:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum hung?" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> autovacuum has been running on 2 tables for > 5 hours. There tables are \n> not huge (see below). For the past ~1 hour, I've shut off all other \n> activity on this database. The other table being vacuumed has more rows \n> (1897810). Anyone have any ideas about why this is taking so long?\n\nAre those processes actually doing anything, or just waiting? strace\nor local equivalent would be the most conclusive check.\n\n> cemdb=# select c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c \n> join pg_locks l on c.oid=l.relation order by l.pid;\n\nThis query isn't very helpful because it fails to show locks that are\nnot directly associated with tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 May 2009 11:58:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum hung? " } ]
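As a hedged illustration of the last point above, a lock listing that keeps the non-relation lock types as well might look like the sketch below; the pids are the autovacuum backends shown earlier in the thread and would of course differ on any other system.

SELECT l.pid, l.locktype, l.mode, l.granted, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE l.pid IN (24866, 24869, 24872)
ORDER BY l.pid, l.locktype;

Rows whose relname comes back null are the transactionid, virtualxid and other non-table locks that the inner join in the original query silently dropped.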
[ { "msg_contents": "Alvaro Herrera [[email protected]] wrote:\n> What's vacuum_cost_delay?\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\nso, whatever the default happens to be.\n\nThanks,\nBrian\n", "msg_date": "Fri, 29 May 2009 17:04:21 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum hung?" } ]
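Since the lines quoted above are commented out, a hedged way to see what the running server is actually using for these settings, rather than inferring it from the file, is a query along these lines:

SELECT name, setting, source
FROM pg_settings
WHERE name LIKE '%vacuum_cost%'
   OR name LIKE 'autovacuum%';

The source column distinguishes compiled-in defaults from values that were really set in postgresql.conf.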
[ { "msg_contents": "I have 3 servers, all with identical databases, and each performing\nvery differently for the same queries.\n\nwww3 is my fastest, www2 is the worst, and www1 is in the middle...\neven though www2 has more ram, faster CPU and faster drives (by far),\nand is running a newer version of postgres. I have been reluctant to\npost because I know it's something that I'm doing wrong in the\nsettings or something that I should be able to figure out.\n\nLast year at this time www2 was the fastest... in fact, I bought the\nmachine to be my \"primary\" server, and it performed as such.... with\nthe striped volumes and higher RAM, it outpaced the other 2 in every\nquery. It has, over time, \"degenerated\" to being so slow it\nfrequently has to be taken out of the load-balance set. The only\nmajor changes to the sever have been \"yum update (or ports upgrade)\"\nto the newer releases ... over time.\n\nThe query planner \"knows\" about the problem, but I'm not sure *why*\nthere's a difference... since the tables all have the same data ...\nloaded from a dump nightly.\n\nThe planner shows a different number of \"rows\" even though the items\ntable has 22680 rows in all 3 instances. I ran a vacuum analyze just\nbefore these runs hoping to get them all into a similar \"clean\" state.\n\nThe difference is outlined below, with the query planner output from a\ntable-scan query that greatly exaggerates the differences in\nperformance, along with some info about the configuration and platform\ndifferences.\n\nQUERY: explain select count(*) from items where name like '%a%'\n\nwww3: psql (PostgreSQL) 8.1.14\nwww3: Linux www3 2.6.23.17-88.fc7 #1 SMP Thu May 15 00:02:29 EDT 2008\nx86_64 x86_64 x86_64 GNU/Linux\n\nwww3: Mem: 1996288k total, 1537576k used, 458712k free, 23124k buffers\nwww3: Swap: 0k total, 0k used, 0k free, 1383208k cached\n\nwww3: shared_buffers = 10000 # min 16 or\nmax_connections*2, 8KB each\n\nwww3: QUERY PLAN\nwww3: ------------------------------------------------------------------\nwww3: Aggregate (cost=3910.07..3910.08 rows=1 width=0)\nwww3: -> Seq Scan on items (cost=0.00..3853.39 rows=22671 width=0)\nwww3: Filter: (name ~~ '%a%'::text)\nwww3: (3 rows)\nwww3:\n\nwww1: psql (PostgreSQL) 8.1.17\nwww1: Linux www1 2.6.26.8-57.fc8 #1 SMP Thu Dec 18 18:59:49 EST 2008\nx86_64 x86_64 x86_64 GNU/Linux\n\nwww1: Mem: 1019376k total, 973064k used, 46312k free, 27084k buffers\nwww1: Swap: 1959888k total, 17656k used, 1942232k free, 769776k cached\n\nwww1: shared_buffers = 6000 # min 16 or\nmax_connections*2, 8KB each\n\nwww1: QUERY PLAN\nwww1: ------------------------------------------------------------------\nwww1: Aggregate (cost=5206.20..5206.21 rows=1 width=0)\nwww1: -> Seq Scan on items (cost=0.00..5149.50 rows=22680 width=0)\nwww1: Filter: (name ~~ '%a%'::text)\nwww1: (3 rows)\nwww1:\n\nwww2: psql (PostgreSQL) 8.2.13\nwww2: FreeBSD www2 6.3-RELEASE-p7 FreeBSD 6.3-RELEASE-p7 #0: Sun Dec\n21 03:24:04 UTC 2008\[email protected]:/usr/obj/usr/src/sys/SMP amd64\n\nwww2: Mem: 57M Active, 1078M Inact, 284M Wired, 88M Cache, 213M Buf, 10M Free\nwww2: Swap: 4065M Total, 144K Used, 4065M Free\n\nwww2: shared_buffers = 360MB # min 128kB or\nmax_connections*16kB\n\nwww2: QUERY PLAN\nwww2: ------------------------------------------------------------------\nwww2: Aggregate (cost=17659.45..17659.46 rows=1 width=0)\nwww2: -> Seq Scan on items (cost=0.00..17652.24 rows=2886 width=0)\nwww2: Filter: (name ~~ '%a%'::text)\nwww2: (3 rows)\nwww2:\n", "msg_date": "Sun, 31 May 2009 05:59:35 -0400", 
"msg_from": "Erik Aronesty <[email protected]>", "msg_from_op": true, "msg_subject": "degenerate performance on one server of 3" }, { "msg_contents": "Erik Aronesty <[email protected]> writes:\n> I have 3 servers, all with identical databases, and each performing\n> very differently for the same queries.\n\nI'm betting on varying degrees of table bloat. Have you tried vacuum\nfull, cluster, etc?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 31 May 2009 10:52:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3 " }, { "msg_contents": "Tom Lane wrote:\n> Erik Aronesty <[email protected]> writes:\n>> I have 3 servers, all with identical databases, and each performing\n>> very differently for the same queries.\n> \n> I'm betting on varying degrees of table bloat. Have you tried vacuum\n> full, cluster, etc?\n\nOr, if you have been using VACUUM FULL, try REINDEXing the tables,\nbecause it could easily be index bloat. Clustering the table will take\ncare of index bloat as well as table bloat.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 01 Jun 2009 11:28:51 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> Tom Lane wrote:\n>> I'm betting on varying degrees of table bloat. Have you tried vacuum\n>> full, cluster, etc?\n\n> Or, if you have been using VACUUM FULL, try REINDEXing the tables,\n> because it could easily be index bloat. Clustering the table will take\n> care of index bloat as well as table bloat.\n\nIndex bloat wouldn't explain the slow-seqscan behavior the OP was\ncomplaining of. Still, you're right that if the tables are bloated\nthen their indexes probably are too ... and that VACUUM FULL alone\nwill not fix that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 31 May 2009 23:40:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3 " }, { "msg_contents": "it was all vacuum full...thanks\n\nthe other 2 servers truncate and reload that table from time to time\n... IE: they are always vacuumed\n\nas the \"master\" ... that server never does it... hence the bloat\n\nbut why wasn't autovac enough to reclaim at least *most* of the space?\n that table *does* get updated every day... but rows are not\noverwritten, just edited. it seems that most of the pages should be\n\"reused\" via autovac ....\n\n\n\nOn Sun, May 31, 2009 at 11:40 PM, Tom Lane <[email protected]> wrote:\n> Craig Ringer <[email protected]> writes:\n>> Tom Lane wrote:\n>>> I'm betting on varying degrees of table bloat.  Have you tried vacuum\n>>> full, cluster, etc?\n>\n>> Or, if you have been using VACUUM FULL, try REINDEXing the tables,\n>> because it could easily be index bloat. Clustering the table will take\n>> care of index bloat as well as table bloat.\n>\n> Index bloat wouldn't explain the slow-seqscan behavior the OP was\n> complaining of.  Still, you're right that if the tables are bloated\n> then their indexes probably are too ... 
and that VACUUM FULL alone\n> will not fix that.\n>\n>                        regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 1 Jun 2009 00:12:57 -0400", "msg_from": "Erik Aronesty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: degenerate performance on one server of 3" }, { "msg_contents": "Erik Aronesty <[email protected]> writes:\n> but why wasn't autovac enough to reclaim at least *most* of the space?\n\nAutovac isn't meant to reclaim major amounts of bloat; it's more in the\nline of trying to prevent it from happening in the first place. To\nreclaim bloat it would have to execute VACUUM FULL, or some other\noperation that requires exclusive table lock, which doesn't seem like\na good idea for an automatic background operation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Jun 2009 10:06:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3 " }, { "msg_contents": "I think, perhaps, autovac wasn't running on that machine.\n\nIs there any way to check to see if it's running?\n\nI have enabled all the options , and I know it's running on my other\nservers because I see\n\nLOG: autovacuum.... entries (a profusion of them)\n\nI suspect, perhaps, that it's just not showing up in the log since my\n8.2 BSD box came with different settings by default.\n\ncurrent settings:\n\nautovacuum = on\nstats_start_collector = on # needed for block or row stats\nstats_row_level = on\nlog_min_error_statement = error\nlog_min_messages = notice\nlog_destination = 'syslog'\nclient_min_messages = notice\n\n....should be enought to get it going and for me to see it right? not\nsure which setting controls logging of autovac, nor am i sure of a way\nto *ask* the server if autovac is running.\n\nOn Mon, Jun 1, 2009 at 10:06 AM, Tom Lane <[email protected]> wrote:\n> Erik Aronesty <[email protected]> writes:\n>> but why wasn't autovac enough to reclaim at least *most* of the space?\n>\n> Autovac isn't meant to reclaim major amounts of bloat; it's more in the\n> line of trying to prevent it from happening in the first place.  To\n> reclaim bloat it would have to execute VACUUM FULL, or some other\n> operation that requires exclusive table lock, which doesn't seem like\n> a good idea for an automatic background operation.\n>\n>                        regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 3 Jun 2009 15:30:30 -0400", "msg_from": "Erik Aronesty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: degenerate performance on one server of 3" }, { "msg_contents": "Erik Aronesty <[email protected]> writes:\n> I think, perhaps, autovac wasn't running on that machine.\n> Is there any way to check to see if it's running?\n\n> I have enabled all the options , and I know it's running on my other\n> servers because I see\n\n> LOG: autovacuum.... entries (a profusion of them)\n\n> I suspect, perhaps, that it's just not showing up in the log since my\n> 8.2 BSD box came with different settings by default.\n\n8.2 has far crummier support for logging what autovacuum is doing than\n8.3 does :-(. 
The settings you show should mean that it's running, but\nthe only way to check specifically is to crank log_min_messages way up,\nwhich will clutter your log with a lot of useless noise along with\nautovacuum's messages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Jun 2009 17:15:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3 " }, { "msg_contents": "Erik Aronesty wrote:\n> I think, perhaps, autovac wasn't running on that machine.\n> \n> Is there any way to check to see if it's running?\n> \n\nsince it looks like stats are on too....\n\nhttp://www.network-theory.co.uk/docs/postgresql/vol3/ViewingCollectedStatistics.html\n\nread the entry on pg_stat_all_tables\n", "msg_date": "Wed, 03 Jun 2009 22:35:19 -0400", "msg_from": "Reid Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3" }, { "msg_contents": "> read the entry on pg_stat_all_tables\n\nyeah, it's running ... vacuum'ed last night\n\nit's odd, to me, that the performance would degrade so extremely\n(noticeably) over the course of one year on a table which has few\ninsertions, no deletions,and daily updates of an integer non null\ncolumn (stock level).\n\nis there some way to view the level of \"bloat that needs full\" in each\ntable, so i could write a script that alerts me to the need of a\n\"vacuum full\" without waiting for random queries to \"get slow\"?\n\nlooking at the results of the \"bloat query\", i still can't see how to\nknow whether bloat is getting bad in an objective manner.\n\n http://pgsql.tapoueh.org/site/html/news/20080131.bloat.html\n\non the machines that perform well 30MB of bloat seems to be fine, and\ni don't knwo what the badly performing table's bloat was, since i\nalready vac'ed it.\n\n.......\n\nthere is one table i have with 2GB of bloat ... but it's performance\n(since all querys are on a clustered index) is more than adequate.\nalso, it's so big i'm afraid my server would be down for 24 hours on\nthat on vacuum\n\nit's a rolling \"cookie table\" with millions of random-id'ed entries\nthat expire after a few months ... i think i'm going to copy the most\nrecent 6 months worth of rows to a new table, then just drop the old\none..... 
seems easier to me.than the scary unknown of running \"vaccum\nfull\", and then i won't have to worry about the system being down on a\ntable lock.\n\nSeems like \"VACUUM FULL\" could figure out to do that too depending on\nthe bloat-to-table-size ratio ...\n\n - copy all rows to new table\n - lock for a millisecond while renaming tables\n - drop old table.\n\nLocking a whole table for a very long time is scary for admins.\n\n\n- erik\n", "msg_date": "Thu, 4 Jun 2009 07:31:44 -0400", "msg_from": "Erik Aronesty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: degenerate performance on one server of 3" }, { "msg_contents": "On Thu, Jun 4, 2009 at 7:31 AM, Erik Aronesty <[email protected]> wrote:\n> Seems like \"VACUUM FULL\" could figure out to do that too depending on\n> the bloat-to-table-size ratio ...\n>\n>   - copy all rows to new table\n>   - lock for a millisecond while renaming tables\n>   - drop old table.\n\nYou'd have to lock the table at least against write operations during\nthe copy; otherwise concurrent changes might be lost.\n\nAIUI, this is pretty much what CLUSTER does, and I've heard that it\nworks as well or better as VACUUM FULL for bloat reclamation.\nHowever, it's apparently still pessimal:\nhttp://archives.postgresql.org/pgsql-hackers/2008-08/msg01371.php (I\nhad never heard this word before Greg Stark used it in this email, but\nit's a great turn of phrase, so I'm reusing it.)\n\n> Locking a whole table for a very long time is scary for admins.\n\nAgreed. It would be nice if we had some kind of \"incremental full\"\nvacuum that would run for long enough to reclaim a certain number of\npages and then exit. Then you could clean up this kind of problem\nincrementally instead of in one shot. It would be even nicer if the\nlock strength could be reduced, but I'm guessing that's not easy to do\nor someone would have already done it by now. I haven't read the code\nmyself.\n\n...Robert\n", "msg_date": "Thu, 4 Jun 2009 09:16:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3" }, { "msg_contents": "\nOn 6/4/09 4:31 AM, \"Erik Aronesty\" <[email protected]> wrote:\n\n>> read the entry on pg_stat_all_tables\n> \n> yeah, it's running ... 
vacuum'ed last night\n> \n> it's odd, to me, that the performance would degrade so extremely\n> (noticeably) over the course of one year on a table which has few\n> insertions, no deletions,and daily updates of an integer non null\n> column (stock level).\n> \n> is there some way to view the level of \"bloat that needs full\" in each\n> table, so i could write a script that alerts me to the need of a\n> \"vacuum full\" without waiting for random queries to \"get slow\"?\n> \n> looking at the results of the \"bloat query\", i still can't see how to\n> know whether bloat is getting bad in an objective manner.\n> \n> http://pgsql.tapoueh.org/site/html/news/20080131.bloat.html\n> \n> on the machines that perform well 30MB of bloat seems to be fine, and\n> i don't knwo what the badly performing table's bloat was, since i\n> already vac'ed it.\n> \n\nUpdates require space as well, the full MVCC process requires that the\nvalues for all open transactions exist, so updates are not overwrites, but\ncopies (of the whole tuple in the worst case, of just the column(s) in the\nbest).\n\nFor heavily updated tables, adjusting the table (and maybe index) fillfactor\nwill help prevent bloat, by adding a constant amount of extra space for temp\ndata for updates.\n\nSee ALTER TABLE and CREATE TABLE (and the Index variants).\n\nALTER TABLE foo SET (fillfactor=90);\n\nThis will leave on average, 10% of every 8k block empty and allow updates to\ncolumns to more likely live within the same block.\n\nIndexes have default fillfactor set to 90, I believe.\n\n\n> .......\n> \n> there is one table i have with 2GB of bloat ... but it's performance\n> (since all querys are on a clustered index) is more than adequate.\n> also, it's so big i'm afraid my server would be down for 24 hours on\n> that on vacuum\n> \n> it's a rolling \"cookie table\" with millions of random-id'ed entries\n> that expire after a few months ... i think i'm going to copy the most\n> recent 6 months worth of rows to a new table, then just drop the old\n> one..... seems easier to me.than the scary unknown of running \"vaccum\n> full\", and then i won't have to worry about the system being down on a\n> table lock.\n\nCreating a new table as a select from the old and renaming, OR doing a\nCLUSTER and REINDEX is almost always faster than VACUUM FULL for such large\ntables. But there are different implications on how long other queries are\nlocked out of access to the table. 
CLUSTER will generally lock out other\nqueries for a long time, but the end result (especially combined with a\nreasonable fillfactor setting) ends up best for long term performance and\nreduction in bloat.\n\n\n> Seems like \"VACUUM FULL\" could figure out to do that too depending on\n> the bloat-to-table-size ratio ...\n> \n> - copy all rows to new table\n> - lock for a millisecond while renaming tables\n> - drop old table.\n> \n> Locking a whole table for a very long time is scary for admins.\n> \n\nYou can do the above manually in a single transaction, however any updates\nor inserts during that time may be lost.\n\n\n> \n> - erik\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 4 Jun 2009 10:34:23 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3" }, { "msg_contents": "\n\n\nOn 6/4/09 6:16 AM, \"Robert Haas\" <[email protected]> wrote:\n\n> On Thu, Jun 4, 2009 at 7:31 AM, Erik Aronesty <[email protected]> wrote:\n>> Seems like \"VACUUM FULL\" could figure out to do that too depending on\n>> the bloat-to-table-size ratio ...\n>> \n>>   - copy all rows to new table\n>>   - lock for a millisecond while renaming tables\n>>   - drop old table.\n> \n> You'd have to lock the table at least against write operations during\n> the copy; otherwise concurrent changes might be lost.\n> \n> AIUI, this is pretty much what CLUSTER does, and I've heard that it\n> works as well or better as VACUUM FULL for bloat reclamation.\n> However, it's apparently still pessimal:\n> http://archives.postgresql.org/pgsql-hackers/2008-08/msg01371.php (I\n> had never heard this word before Greg Stark used it in this email, but\n> it's a great turn of phrase, so I'm reusing it.)\n> \n\nInteresting, I suppose a race between VACUUM FULL and CLUSTER will depend a\nlot on the index and how much of the table already exists in RAM.\n\nIf the index is in RAM, and most of the table is, CLUSTER will be rather\nfast. \n\n\n>> Locking a whole table for a very long time is scary for admins.\n> \n> Agreed. It would be nice if we had some kind of \"incremental full\"\n> vacuum that would run for long enough to reclaim a certain number of\n> pages and then exit. Then you could clean up this kind of problem\n> incrementally instead of in one shot. It would be even nicer if the\n> lock strength could be reduced, but I'm guessing that's not easy to do\n> or someone would have already done it by now. I haven't read the code\n> myself.\n> \n> ...Robert\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 4 Jun 2009 11:10:35 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3" }, { "msg_contents": "> See ALTER TABLE and CREATE TABLE  (and the Index variants).\n>\n> ALTER TABLE foo SET (fillfactor=90);\n\nI'll try that.\n\n> This will leave on average, 10% of every 8k block empty and allow updates to\n> columns to more likely live within the same block.\n\nGood for the items table.\n\nProbably bad for the cookies table, with 6 million rows, and thousands\nof inserts and deletes every day, but few updates.\n\nMaybe I should have another way of doing it. 
That table gets\nbloated fast. A vacuum full takes 3 and half hours - which would be\nan unacceptable amount of downtime if I didn't have working mirrors of\neverything.\n\n> Creating a new table as a select from the old and renaming, OR doing a\n> CLUSTER and REINDEX is almost always faster than VACUUM FULL for such large\n> tables.  But there are different implications on how long other queries are\n> locked out of access to the table.  CLUSTER will generally lock out other\n> queries for a long time, but the end result (especially combined with a\n> reasonable fillfactor setting) ends up best for long term performance and\n> reduction in bloat.\n\nI'll try it on the other mirror server, which has the same specs and\nsize, see if CLUSTER/REINDEX is faster.\n\n>>    - copy all rows to new table\n>>    - lock for a millisecond while renaming tables\n>>    - drop old table.\n>>\n>> Locking a whole table for a very long time is scary for admins.\n>>\n>\n> You can do the above manually in a single transaction, however any updates\n> or inserts during that time may be lost.\n\nPostgres can have multiple row versions around for transactions, so\nfor a lockless vacuum full to work, some row versions would have to be\nin the \"new table\". I think that could be done at the expense of some\nperformance degradation, as you'd have to figure out which table to\nlook at (or reads... new one.... nothing there.... ok then old\none...., for copies... there's an update there... put the copy \"under\"\nit), some wacky logic like that.\n\nI don't know postgres's internals well enough to do it for \"all\ncases\", but I know my own DB well enought to get it to work for me.\nHave 2 tables with triggered timestamps, then juggling of the queries\nthat hit the tables (check table a and table b, use the row with newer\ntimestamp for reads, meanwhile a is copying to b, but not overwriting\nnewer rows....something like that).\n\nNot sure whether I'd rather have a 7-hour performance degraded\n\"table-copy\" (which would reindex and recluster too) or a 3.5 hour\ntable-locked vacuum (which doesn't reindex or re-cluster).\n", "msg_date": "Fri, 5 Jun 2009 21:50:22 -0400", "msg_from": "Erik Aronesty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: degenerate performance on one server of 3" }, { "msg_contents": "On Thu, Jun 4, 2009 at 7:31 AM, Erik Aronesty<[email protected]> wrote:\n> is there some way to view the level of \"bloat that needs full\" in each\n> table, so i could write a script that alerts me to the need of a\n> \"vacuum full\"  without waiting for random queries to \"get slow\"?\n>\n> looking at the results of the \"bloat query\", i still can't see how to\n> know whether bloat is getting bad in an objective manner.\n\nOne other thought on this... I think the main thing to consider is\nbloat as a percentage of table size. When you go to sequential scan\nthe table, a table with as much bloat as data will take twice as long\nto scan, one with twice as much bloat as data will take three times as\nlong to scan, and so on.\n\nIf you're only ever doing index scans, the effect will be less\nnoticeable, but in round figures comparing the amount of bloat to the\namount of data is a good place to start. I usually find 3x is about\nwhere the pain starts to hit. Also, small tables can sometimes\ntolerate a higher percentage of bloat than large ones, because those\ntable scans tend to be fast anyway.\n\nA lot of times bloat happens at one particular time and just never\ngoes away. 
Leaving an open transaction around for an hour or two can\nbloat all of your tables, and they'll never get de-bloated on their\nown without help. It would be nice if VACUUM had even a little bit of\ncapability for incrementally improving this situation, but currently\nit doesn't. So when you mention running for a year, it's not unlikely\nthat you had one bad day (or several days in a row) when you collected\nall of that bloat, rather than accumulating it gradually over time.\n\n...Robert\n", "msg_date": "Fri, 5 Jun 2009 22:17:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: degenerate performance on one server of 3" } ]
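One low-effort way to spot the situation described in this thread -- a table quietly accumulating dead rows that autovacuum never reclaims -- is to watch the statistics views instead of waiting for queries to get slow. A sketch only: the dead-tuple columns used here exist in 8.3 and later, and the 25% threshold is an arbitrary illustration, not a figure from the thread.

    -- tables with a large share of dead rows, plus when they were last vacuumed
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           last_vacuum,
           last_autovacuum
    FROM pg_stat_user_tables
    WHERE n_dead_tup > 0.25 * n_live_tup   -- arbitrary example threshold
    ORDER BY n_dead_tup DESC;

On an 8.2 server like the one above the dead-tuple counters are missing, but the last_autovacuum timestamp in the same view (the pg_stat_all_tables check Reid points to) still shows whether autovacuum has touched a table recently.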
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> \n> Are those processes actually doing anything, or just waiting? strace\n> or local equivalent would be the most conclusive check.\nThese must not have been hung, because they finally completed (after \n10-15 hrs - some time between 11pm and 8am). Question is why does it \ntake so long to do this on such a relatively small table?\n\n> This query isn't very helpful because it fails to show locks that are\n> not directly associated with tables.\nHow can that (locks not directly associated...) be determined?\n\n\nThanks,\nBrian\n", "msg_date": "Sun, 31 May 2009 10:27:11 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum hung?" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> Tom Lane [[email protected]] wrote:\n>> Are those processes actually doing anything, or just waiting? strace\n>> or local equivalent would be the most conclusive check.\n\n> These must not have been hung, because they finally completed (after \n> 10-15 hrs - some time between 11pm and 8am). Question is why does it \n> take so long to do this on such a relatively small table?\n\nThey might have been blocked behind some other process that was sitting\nin an open transaction for some reason. The other likely cause is badly\nchosen autovacuum delay, but I think that was already covered.\n\n>> This query isn't very helpful because it fails to show locks that are\n>> not directly associated with tables.\n\n> How can that (locks not directly associated...) be determined?\n\nDon't assume every row in pg_locks has a join partner in pg_class.\nYou could use an outer join ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 31 May 2009 13:32:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum hung? " } ]
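One way to write the outer join Tom suggests, so that locks with no matching pg_class row (transaction locks, advisory locks and so on) still show up -- a sketch, not the exact query used in the thread:

    SELECT l.pid, l.locktype, l.mode, l.granted, c.relname
    FROM pg_locks l
    LEFT JOIN pg_class c ON c.oid = l.relation
    ORDER BY l.pid;

Rows where relname comes back NULL are exactly the locks the original table-only join was hiding.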
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> They might have been blocked behind some other process that was sitting\n> in an open transaction for some reason. The other likely cause is badly\n> chosen autovacuum delay, but I think that was already covered.\nWell, after I noticed this running for a while, I shut down the postgres \nport and restarted postgres. The autovacuum of these tables kicked in \npromptly when postgres was back up. I then let them run. So, I don't \nthink that surmise #1 is likely.\nAs for #2, I'm using the default. These tables get updated once a day \nwith each row (potentially) being updated 1-24 times over many minutes \nto a handful of hours. Do you think it would be better to manually \nvacuum these tables? If so, would it be best to disable autovacuum of \nthem? And while I'm at it, if you disable autovacuum of the master table \nwill that disable it for the actual partitions?\n\n> Don't assume every row in pg_locks has a join partner in pg_class.\n> You could use an outer join ...\nYes, of course. It never occurred to me that there could be db locks not \nassociated with tables.\n\nThanks,\nBrian\n\n", "msg_date": "Sun, 31 May 2009 12:08:25 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum hung?" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> Do you think it would be better to manually \n> vacuum these tables? If so, would it be best to disable autovacuum of \n> them? And while I'm at it, if you disable autovacuum of the master table \n> will that disable it for the actual partitions?\n\nNo, no, and no. What would be best is to find out what actually\nhappened. The evidence is gone now, but if you see it again please\ntake a closer look.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 31 May 2009 15:34:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum hung? " } ]
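If the vacuums really were stuck behind a session that had left a transaction open, a check along these lines would have shown it. This is an illustrative sketch using the 8.3-era column names (procpid and current_query were renamed in later releases; xact_start is not present before 8.3):

    -- sessions ordered by how long their current transaction has been open
    SELECT procpid, usename, xact_start, query_start, current_query
    FROM pg_stat_activity
    ORDER BY xact_start;

A current_query of '<IDLE> in transaction' combined with an old xact_start is the classic culprit for blocked vacuums.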
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> No, no, and no. What would be best is to find out what actually\n> happened. The evidence is gone now, but if you see it again please\n> take a closer look.\nOK. You mentioned strace. It's got a lot of options; any in particular \nthat would be useful if this happens again?\n\nBrian\n\n", "msg_date": "Sun, 31 May 2009 13:44:40 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum hung?" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> OK. You mentioned strace. It's got a lot of options; any in particular \n> that would be useful if this happens again?\n\nI'd just do \"strace -p processID\" and watch it for a little while.\nIf it's not hung, you'll see the process issuing kernel calls at\nsome rate or other.\n\nIf it is hung, you'll most likely see something like\n\n\tsemop(...) \n\nand it just sits there. Also, if you see nothing but a series of\nselect()s with varying timeouts, that would suggest a stuck spinlock\n(although I doubt that was happening, as it would eventually timeout\nand report a failure).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 31 May 2009 17:13:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum hung? " }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> OK. You mentioned strace. It's got a lot of options; any in particular \n> that would be useful if this happens again?\n\nOh, and don't forget the more-complete pg_locks state. We'll want all\nthe columns of pg_locks, not just the ones you showed before.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 31 May 2009 17:14:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum hung? " } ]
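Putting Tom's two requests together, the next occurrence could be captured with something like the following; the process ID, database name and file name are placeholders:

    # identify the autovacuum backend, then watch its kernel calls for a while
    ps auxww | grep postgres
    strace -p 12345          # substitute the real PID (or use the local strace equivalent); Ctrl-C to stop

    # save the complete pg_locks contents, every column this time
    psql -d mydb -c 'SELECT * FROM pg_locks' > pg_locks_snapshot.txt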
[ { "msg_contents": "Having a doubt, we want to vacuum and reindex some 50 most used tables\ndaily on specific time. Is it best to have a function in postgres and call\nit in cron or is there any other good way to do the two process for\nspecified tables at specified time?\n-Arvind S\n\n*\n**\"Many of lifes failure are people who did not realize how close they were\nto success when they gave up.\"\n-Thomas Edison*\n\n Having a doubt, we want to vacuum and reindex some 50\nmost used tables daily on specific time. Is it best to have a function\nin postgres and call it in cron or is there any other good way to do the two process for specified tables at specified time?-Arvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"\n\n-Thomas Edison", "msg_date": "Mon, 1 Jun 2009 10:56:07 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuuming technique doubt" }, { "msg_contents": "On Sun, May 31, 2009 at 10:26 PM, S Arvind <[email protected]> wrote:\n> Having a doubt, we want to vacuum and reindex some 50 most used tables daily\n> on specific time. Is it best to have a function in postgres and call it in\n> cron or is there any other good way to do the two process for specified\n> tables at specified time?\n\nJust write a SQL script with the appropriate commands and run it using\npsql from cron. Set up your .pgpass and/or pg_hba.conf as\nappropriate.\n\n-Dave\n", "msg_date": "Mon, 1 Jun 2009 05:45:06 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuuming technique doubt" }, { "msg_contents": "On Mon, 1 Jun 2009, S Arvind wrote:\n\n> Having a doubt, we want to vacuum and reindex some 50 most used tables daily on specific time. Is it best to have a function in\n> postgres and call it in cron or is there any other good way to do the two process for specified tables at specified time?\n\nIf you haven't been using VACUUM properly, it's possible to get into a \nposition where you need to REINDEX your tables and go through the sort of \ngiant cleanup step you describe. If you think you need to do that daily, \nthough, you probably should take a look at tuning autovacuum rather than \ntrying to fix the problem manually all the time.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 1 Jun 2009 11:34:53 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuuming technique doubt" }, { "msg_contents": "Hi Smith,\nThe reason why we need it manually is , we don't need any performance drop\nin our production hours. So we figured out the most less usage working time,\nmost freq used tables and want to perform that on daily . so in weekends we\ncan vaccum and reindex entire db.. Is the model is not more efficient Smith?\n\n-Arvind S\n\n*\n\"Many of lifes failure are people who did not realize how close they were to\nsuccess when they gave up.\"\n-Thomas Edison\n*\n\nOn Mon, Jun 1, 2009 at 9:04 PM, Greg Smith <[email protected]> wrote:\n\n> On Mon, 1 Jun 2009, S Arvind wrote:\n>\n> Having a doubt, we want to vacuum and reindex some 50 most used tables\n>> daily on specific time. 
Is it best to have a function in\n>> postgres and call it in cron or is there any other good way to do the two\n>> process for specified tables at specified time?\n>>\n>\n> If you haven't been using VACUUM properly, it's possible to get into a\n> position where you need to REINDEX your tables and go through the sort of\n> giant cleanup step you describe. If you think you need to do that daily,\n> though, you probably should take a look at tuning autovacuum rather than\n> trying to fix the problem manually all the time.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n\nHi Smith,The reason why we need it manually is , we don't need any  performance drop in our production hours. So we figured out the most less usage working time, most freq used tables and want to perform that  on daily . so in weekends we can vaccum and reindex entire db.. Is the model is not more efficient Smith?\n-Arvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"-Thomas Edison\nOn Mon, Jun 1, 2009 at 9:04 PM, Greg Smith <[email protected]> wrote:\nOn Mon, 1 Jun 2009, S Arvind wrote:\n\n\nHaving a doubt, we want to vacuum and reindex some 50 most used tables daily on specific time. Is it best to have a function in\npostgres and call it in cron or is there any other good way to do the two process for specified tables at specified time?\n\n\nIf you haven't been using VACUUM properly, it's possible to get into a position where you need to REINDEX your tables and go through the sort of giant cleanup step you describe.  If you think you need to do that daily, though, you probably should take a look at tuning autovacuum rather than trying to fix the problem manually all the time.\n\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD", "msg_date": "Mon, 1 Jun 2009 23:31:37 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuuming technique doubt" }, { "msg_contents": "S Arvind <[email protected]> wrote: \n \n> The reason why we need it manually is , we don't need any \n> performance drop in our production hours. So we figured out the most\n> less usage working time, most freq used tables and want to perform\n> that on daily . so in weekends we can vaccum and reindex entire db..\n \nBy the time you get to your mass reindex the bloat will be harming\nyour performance much more than the autovacuum needs to do. Check the\ndocumentation here:\n \nhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST\n \nI hope this helps.\n \n-Kevin\n", "msg_date": "Mon, 01 Jun 2009 19:35:24 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuuming technique doubt" }, { "msg_contents": "On Mon, Jun 1, 2009 at 8:35 PM, Kevin Grittner\n<[email protected]> wrote:\n> S Arvind <[email protected]> wrote:\n>\n>> The reason why we need it manually is , we don't need any\n>> performance drop in our production hours. So we figured out the most\n>> less usage working time, most freq used tables and want to perform\n>> that on daily . so in weekends we can vaccum and reindex entire db..\n>\n> By the time you get to your mass reindex the bloat will be harming\n> your performance much more than the autovacuum needs to do.  
Check the\n> documentation here:\n>\n> http://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST\n>\n> I hope this helps.\n\nBut before you try that, try just using the default settings and see\nif you actually have a problem.\n\n...Robert\n", "msg_date": "Mon, 1 Jun 2009 20:41:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuuming technique doubt" } ]
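For completeness, the cron-plus-psql variant David Rees describes usually ends up as a plain SQL file and one crontab line. Everything below (file names, table names, schedule) is invented for illustration, and per the later replies it only makes sense if autovacuum with a suitable cost delay really cannot keep up:

    -- maintenance.sql: one entry per heavily used table
    VACUUM ANALYZE busy_table_1;
    REINDEX TABLE busy_table_1;
    VACUUM ANALYZE busy_table_2;
    REINDEX TABLE busy_table_2;
    -- ...and so on for the other tables

run from the database owner's crontab during the quiet window, with authentication set up via .pgpass or pg_hba.conf as noted above:

    30 2 * * * psql -d mydb -f /path/to/maintenance.sql >> /tmp/pg_maintenance.log 2>&1

Keep in mind that REINDEX blocks writes to the table and any queries that want to use the index being rebuilt, which is exactly the production impact being avoided, so the schedule matters more than the mechanism.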
[ { "msg_contents": "I have a DB table with 25M rows, ~3K each (i.e. ~75GB), that together with\nmultiple indexes I use (an additional 15-20GB) will not fit entirely in\nmemory (64GB on machine). A typical query locates 300 rows thru an index,\noptionally filters them down to ~50-300 rows using other indexes, finally\nfetching the matching rows. Response times vary between 20ms on a warm DB to\n20 secs on a cold DB. I have two related questions:\n\n1. At any given time how can I check what portion (%) of specific tables and\nindexes is cached in memory?\n\n2. What is the best way to warm up the cache before opening the DB to\nqueries? E.g. \"select *\" forces a sequential scan (~15 minutes on cold DB)\nbut response times following it are still poor. Is there a built-in way to\ndo this instead of via queries?\n\nThanks, feel free to also reply by email ([email protected])\n\n-- Shaul", "msg_date": "Mon, 1 Jun 2009 14:05:48 +0300", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql cache (memory) performance + how to warm up the cache" }, { "msg_contents": "On Mon, 1 Jun 2009, Shaul Dar wrote:\n\n> 1. At any given time how can I check what portion (%) of specific tables and indexes is cached in memory?\n\nThis is a bit tricky. PostgreSQL caches information in its shared_buffers \ncache, and you can get visibility into that if you install the \ncontrib/pg_buffercache library into your database. I go over the theory \nhere and give some sample queries, including the one you're asking for, in \nmy \"Inside the PostgreSQL Buffer Cache\" presentation at \nhttp://www.westnet.com/~gsmith/content/postgresql/\n\nHowever, in a normal installation, the operating system cache will have a \nsignificant amount of data stored in it as well. Figuring out that is \nmore complicated. The best integrated script I've seen for that so far is \nat http://www.kennygorman.com/wordpress/?p=250 but that's not really \nintegrated into an easy to use tool yet. Improving that is on a couple of \npeople's agendas for the next PostgreSQL release, that's as good as it \ngets for what I'm aware that's already public.\n\n> 2. What is the best way to warm up the cache before opening the DB to queries? E.g. \"select *\" forces a sequential scan (~15 minutes\n> on cold DB) but response times following it are still poor. 
Is there a built-in way to do this instead of via queries?a\n\nThere is an optimization in PostgreSQL 8.3 and later that keeps sequential \nscans from using large amounts of the PostgreSQL buffer cache, and full \ntable scans don't pull the index pages at all--and those are likely what \nyou really want cached.\n\nWhat you probably want to do is run a query that uses an index \naggressively (confirm this via EXPLAIN) instead. You might be able to get \nthat to happen by selecting everything using an ORDER BY that is expensive \n(from a planner cost perspective), therefore making the indexed scan seem \nmore attractive, but the exact behavior here depends on how you've got \nyour planner parameters setup.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 1 Jun 2009 09:18:04 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql cache (memory) performance + how to warm up the cache" } ]
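A sketch of the kind of pg_buffercache query Greg's presentation covers, answering question 1 above. It assumes the contrib/pg_buffercache module is installed in the database and the default 8 kB block size:

    -- shared_buffers usage per relation in the current database
    SELECT c.relname,
           count(*)                                AS buffers,
           pg_size_pretty(count(*) * 8192::bigint) AS buffered
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = c.relfilenode
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY count(*) DESC
    LIMIT 20;

Comparing the buffer count against pg_relation_size() for the same relation gives the percentage asked about; the operating-system cache remains the separate, harder half of the answer, as Greg notes.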
[ { "msg_contents": "Hi,\n\nI have at column that is a bit array of 16, each bit specifying if a certain\nproperty, out of 16, is present or not. Our typical query select 300\n\"random\" rows (could be located in different blocks) from the table based on\nanother column+index, and then filters them down to ~50 based on this the\nbit field. Currently we have 16 separate indexes built on each bit, and on\nour 25M rows table each index takes about 880MB for a total of 14GB! I would\nhave liked to change this into a single short integer value with a single\nindex, but I don't know if there is a way to search if specific bits are\nset, using a single index? W/o an index this might be overly expensive,\neven as a filter (on selected 300 rows).\n\n(I also saw the thread\nhttp://archives.postgresql.org/pgsql-performance/2007-09/msg00283.php. As I\nsaid we are currently using the same multiple index \"solution\" described in\nhttp://archives.postgresql.org/pgsql-performance/2007-09/msg00283.php). Any\nsuggestions?\n\nThanks!\n\n-- Shaul (Email: [email protected])\n\nHi,I have at column that is a bit array of 16, each bit specifying if a certain property, out of 16, is present or not. Our typical query select 300 \"random\" rows (could be located in different blocks) from the table based on another\ncolumn+index, and then filters them down to ~50 based on this the bit field. Currently we have 16 separate indexes built on each bit, and on our 25M rows table each index takes about 880MB for a total of 14GB! I would have liked to change this into a single short integer value with a single index, but I don't know if there is a way to search if specific bits are set, using a single index?  W/o an index this might be overly expensive, even as a filter (on selected 300 rows).\n(I also saw the thread http://archives.postgresql.org/pgsql-performance/2007-09/msg00283.php. As I said we are currently using the same multiple index \"solution\" described in http://archives.postgresql.org/pgsql-performance/2007-09/msg00283.php). Any suggestions?\nThanks!-- Shaul (Email: [email protected])", "msg_date": "Mon, 1 Jun 2009 18:46:22 +0300", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Using index for bitwise operations?" }, { "msg_contents": "Shaul Dar wrote:\n> Hi,\n> \n> I have at column that is a bit array of 16, each bit specifying if a certain\n> property, out of 16, is present or not. Our typical query select 300\n> \"random\" rows (could be located in different blocks) from the table based on\n> another column+index, and then filters them down to ~50 based on this the\n> bit field.\n[snip]\n > W/o an index this might be overly expensive,\n > even as a filter (on selected 300 rows).\n\nHave you _tried_ just not having an index at all? Since you are only \naccessing a relatively small number of rows to start with, even an \ninfinitely efficient index isn't going to make that much difference. \nCombine that with the fact that you're going to have the indexes \ncompeting with the table for cache space and I'd see how much difference \nit makes just not having it.\n\nFailing that, perhaps have an index on a single bit if there is one you \nalways/mostly check against.\n\nThe relational way to do this would be one or more property tables \njoined to your main table. 
If the majority of your properties are not \nset then this could be faster too.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 01 Jun 2009 17:17:14 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for bitwise operations?" }, { "msg_contents": "Shaul Dar <[email protected]> writes:\n> I have at column that is a bit array of 16, each bit specifying if a certain\n> property, out of 16, is present or not. Our typical query select 300\n> \"random\" rows (could be located in different blocks) from the table based on\n> another column+index, and then filters them down to ~50 based on this the\n> bit field. Currently we have 16 separate indexes built on each bit, and on\n> our 25M rows table each index takes about 880MB for a total of 14GB!\n\nOuch. One possibility is to replace the bitarray with an integer array\n(k is in the int[] array iff bit k was set in the bitarray) and then use\nthe GIST or GIN indexing capabilities of contrib/intarray. I also seem\nto recall having seen a module that provides GIST indexing for bitops\non plain integers --- have you looked on pgfoundry?\n\nThis isn't necessarily better than what you're doing, as btree indexes\nare a lot better optimized than GIST/GIN. But it would be worth looking\ninto.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Jun 2009 12:18:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for bitwise operations? " }, { "msg_contents": "On Mon, 1 Jun 2009, Shaul Dar wrote:\n> Our typical query select 300 \"random\" rows (could be located in different blocks) from the table based on another\n> column+index, and then filters them down to ~50 based on this the bit field.\n\nSo it seems that you're already using an index to fetch 300 rows from a \nbig table, and then filtering that down to ~50 based on the über-complex \nstuff.\n\nThat's the right way to do it. There isn't really an appropriate place to \nadd another index into this query plan. Filtering 300 rows is peanuts for \nPostgres.\n\nYou quite probably won't get any benefit from having a bitwise index, \nunless you can make a multi-column index with the existing index stuff \nfirst and then the bitwise stuff as a second column. However, that sounds \nlike more effort than benefit.\n\nIf I have my analysis wrong, perhaps you could post your EXPLAIN ANALYSE \nresults so we can see what you mean.\n\nMatthew\n\n-- \n What goes up must come down. Ask any system administrator.", "msg_date": "Tue, 2 Jun 2009 13:48:39 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using index for bitwise operations?" } ]
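A sketch of the int[]-plus-intarray route Tom describes, with invented table and column names; it assumes contrib/intarray is installed, which provides the gin__int_ops operator class used below:

    -- store the set property numbers as an integer array instead of 16 bit columns
    ALTER TABLE items ADD COLUMN props integer[];

    CREATE INDEX items_props_gin ON items USING gin (props gin__int_ops);

    -- rows having both property 3 and property 7 set
    SELECT * FROM items WHERE props @> ARRAY[3,7];

    -- rows having at least one of properties 2, 5 or 11
    SELECT * FROM items WHERE props && ARRAY[2,5,11];

Whether this beats simply filtering the ~300 candidate rows after the existing index scan is the question Matthew raises, so it is worth checking with EXPLAIN ANALYZE before committing to the extra index.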
[ { "msg_contents": "Hi,\n\nI¹m about to set up a large instance on Amazon EC2 to be our DB server.\n\nBefore we switch to using it in production I would like to simulate some\nload on it so that I know what it can handle and so that I can make sure I\nhave the optimal settings in the config file.\n\nWhat is the best strategy out there for doing this? Does anyone know of\nsome resource that talks about doing this?\n\nThanks,\n\nPeter\n\n\n\nBest way to load test a postgresql server\n\n\nHi,\n\nI’m about to set up a large instance on Amazon EC2 to be our DB server.\n\nBefore we switch to using it in production I would like to simulate some load on it so that I know what it can handle and so that I can make sure I have the optimal settings in the config file.\n\nWhat is the best strategy out there for doing this?  Does anyone know of some resource that talks about doing this?\n\nThanks,\n\nPeter", "msg_date": "Mon, 01 Jun 2009 11:55:53 -0400", "msg_from": "\"Peter Sheats\" <[email protected]>", "msg_from_op": true, "msg_subject": "Best way to load test a postgresql server" }, { "msg_contents": "Disclaimer : I'm very much a newbie here!\n\nBut I am on the path in my new job to figure this stuff out as well,\nand went to PG Con here in Ottawa 2 weeks ago and attended quite a few\nlectures on this topic. Have a look at :\n\nhttp://wiki.postgresql.org/wiki/PgCon_2009\n\nAnd in particular \"Database Hardware Benchmarking\" by Greg Smith\nand\n\"Visualizing Postgres\" by Michael Glaesmann\n\"Performance Whack-a-Mole\" by Josh Berkus\n\n-- \n“Mother Nature doesn’t do bailouts.”\n - Glenn Prickett\n", "msg_date": "Mon, 1 Jun 2009 12:27:23 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "Hi, \"Peter Sheats\" <[email protected]> writes: > I’m about to \nset up a large instance on Amazon EC2 to be our DB server. > > \nBefore we switch to using it in production I would like to \nsimulate some load on it so that I know what it can handle and so \nthat I can make sure I have the > optimal settings in the config \nfile. > > What is the best strategy out there for doing this? \nDoes anyone know of some resource that talks about doing this? \nI'd recommand having a look at tsung which will be able to replay \na typical application scenario with as many concurrent users as \nyou want to: \nhttp://archives.postgresql.org/pgsql-admin/2008-12/msg00032.php\n http://tsung.erlang-projects.org/\n http://pgfouine.projects.postgresql.org/tsung.html\n\nIf you want to replay your logs at the current production speed \nand\nconcurrency, see Playr.\n https://area51.myyearbook.com/trac.cgi/wiki/Playr\n \nRegards,\n-- \ndim\n", "msg_date": "Tue, 02 Jun 2009 11:26:41 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "On Tue, 02 Jun 2009 05:26:41 -0400, Dimitri Fontaine \n<[email protected]> wrote:\n> I'd recommand having a look at tsung which will be able to replay a \n> typical application scenario with as many concurrent users as you want \n> to: http://archives.postgresql.org/pgsql-admin/2008-12/msg00032.php\n> http://tsung.erlang-projects.org/\n> http://pgfouine.projects.postgresql.org/tsung.html\n\nI am having a look at tsung and not getting very far yet. Have you had \nluck with it and do you really mean as many concurrent users as you want? 
\nI was hoping to use it to simulate my current load while tuning and making \nimprovements. So far tsung doesn't appear well suited to my needs. I use \npersistent connections; each tsung session uses a new connection. I have \nmultiple applications that have very usage patterns (some web and largely \nidle, some non web and almost saturated); tsung has virtual users choosing \na session based on a probability with think times. I know many \nprogramming languages; tsung (and its error messages) is in erlang.\n\n> If you want to replay your logs at the current production speed and\n> concurrency, see Playr.\n> https://area51.myyearbook.com/trac.cgi/wiki/Playr\n\nThanks for this tip. It seems worth a look.\n\nRegards,\nKen\n", "msg_date": "Tue, 02 Jun 2009 09:02:51 -0400", "msg_from": "\"Kenneth Cox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "Hi Peter,\n\nI was looking for the same recently, and my answer is as follows:\n\n1. If you want to test the *H/W and configuration of your DBMS* then you can\nuse the pgbench tool (which uses a specific built-in DB+schema, following\nthe TPC benchmark).\n\n2. If you want to *load test your own specific DB* then I am unaware of any\nsuch tools. I ended up using JMeter with the JDBC connector for\nPostgresql<http://jakarta.apache.org/jmeter/usermanual/build-db-test-plan.html>.\nIt took me a while to get it configured and running, but I now think JMeter\nis excellent. I suggest you use JMeter 2.3.2, as I upgraded to 2.3.3 and it\nseems to have a bug with JDBC connection to Postgres.\n\n-- Shaul\n\nOn Mon, Jun 1, 2009 at 6:55 PM, Peter Sheats <[email protected]> wrote:\n\n> Hi,\n>\n> I’m about to set up a large instance on Amazon EC2 to be our DB server.\n>\n> Before we switch to using it in production I would like to simulate some\n> load on it so that I know what it can handle and so that I can make sure I\n> have the optimal settings in the config file.\n>\n> What is the best strategy out there for doing this? Does anyone know of\n> some resource that talks about doing this?\n>\n> Thanks,\n>\n> Peter\n>\n\nHi Peter,I was looking for the same recently, and my answer is as follows:1. If you want to test the H/W and configuration of your DBMS then you can use the pgbench tool (which uses a specific built-in DB+schema, following the TPC benchmark).\n2. If you want to load test your own specific DB then I am unaware of any such tools. I ended up using JMeter with the JDBC connector for Postgresql. It took me a while to get it configured and running, but I now think JMeter is excellent. I suggest you use JMeter 2.3.2, as I upgraded to 2.3.3 and it seems to have a bug with JDBC connection to Postgres.\n-- Shaul\nOn Mon, Jun 1, 2009 at 6:55 PM, Peter Sheats <[email protected]> wrote:\n\nHi,\n\nI’m about to set up a large instance on Amazon EC2 to be our DB server.\n\nBefore we switch to using it in production I would like to simulate some load on it so that I know what it can handle and so that I can make sure I have the optimal settings in the config file.\n\nWhat is the best strategy out there for doing this?  
Does anyone know of some resource that talks about doing this?\n\nThanks,\n\nPeter", "msg_date": "Tue, 2 Jun 2009 16:31:03 +0300", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "\"Kenneth Cox\" <[email protected]> writes:\n> On Tue, 02 Jun 2009 05:26:41 -0400, Dimitri Fontaine\n> <[email protected]> wrote:\n>> I'd recommand having a look at tsung which will be able to replay a\n>> typical application scenario with as many concurrent users as you want\n>> to: http://archives.postgresql.org/pgsql-admin/2008-12/msg00032.php\n>> http://tsung.erlang-projects.org/\n>> http://pgfouine.projects.postgresql.org/tsung.html\n>\n> I am having a look at tsung and not getting very far yet. Have you had luck\n> with it and do you really mean as many concurrent users as you want?\n\nLast time I used it it was in the context of a web application and to\ncompare PostgreSQL against Informix after a migration. So I used the\nHTTP protocol support of the injector.\n\nTsung is based on erlang and can be run from more than one node at any\ntime, last time I checked you could run 600 to 800 concurrent clients\nfrom each node. Recent versions of erlang allow a much greater number\nper node, one or two orders of magnitude greater, as I've been told by\nTsung's main developer.\n\n> I was\n> hoping to use it to simulate my current load while tuning and making\n> improvements. So far tsung doesn't appear well suited to my needs. I use\n> persistent connections; each tsung session uses a new connection. I have\n> multiple applications that have very usage patterns (some web and largely\n> idle, some non web and almost saturated); tsung has virtual users choosing\n> a session based on a probability with think times. I know many programming\n> languages; tsung (and its error messages) is in erlang.\n\nTsung can be setup as an http or postgresql proxy: in this mode it'll\nprepare session files for you while you use your application as\nusual. The thinktime it sees will then get randomized at run time to\nbetter reflect real usage.\n\nYou can define several user arrival phases to see what happens when the\nload raises then get back to normal traffic. Lots of options, really.\n\nTsung generates statistics and comes with tools to analyze them and\nprovide graphs organized into a web page, one of those tools allow to\ndraw graphs from different simulations onto the same chart, with the\nsame scaling, in order to easily compare results.\n\nIt seems to me tsung is a good tool for your use case.\n\nRegards,\n-- \ndim\n", "msg_date": "Wed, 03 Jun 2009 11:29:02 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "I considered Tsung myself but haven't tried it. If you intend to, I suggest\nyou read this excellent tutorial on using Tsung for test-loading\nPostgresql<http://archives.postgresql.org/pgsql-admin/2008-12/msg00032.php>.\nWhile impressed I decided the procedure was too daunting and went with\nJMeter :-) It too can run test from multiple clients and has built in tables\nand graphs and you can save results as CSV or XML etc. 
In particular I\nrecommend adding the extenion \"listener\" (JMeter term for anything that\ncaptures and portrays test results) called Statitical Aggregate Report.\n\nMay the force be with you,\n\n-- Shaul\n\nOn Wed, Jun 3, 2009 at 12:29 PM, Dimitri Fontaine <[email protected]>wrote:\n\n> \"Kenneth Cox\" <[email protected]> writes:\n> > On Tue, 02 Jun 2009 05:26:41 -0400, Dimitri Fontaine\n> > <[email protected]> wrote:\n> >> I'd recommand having a look at tsung which will be able to replay a\n> >> typical application scenario with as many concurrent users as you want\n> >> to: http://archives.postgresql.org/pgsql-admin/2008-12/msg00032.php\n> >> http://tsung.erlang-projects.org/\n> >> http://pgfouine.projects.postgresql.org/tsung.html\n> >\n> > I am having a look at tsung and not getting very far yet. Have you had\n> luck\n> > with it and do you really mean as many concurrent users as you want?\n>\n> Last time I used it it was in the context of a web application and to\n> compare PostgreSQL against Informix after a migration. So I used the\n> HTTP protocol support of the injector.\n>\n> Tsung is based on erlang and can be run from more than one node at any\n> time, last time I checked you could run 600 to 800 concurrent clients\n> from each node. Recent versions of erlang allow a much greater number\n> per node, one or two orders of magnitude greater, as I've been told by\n> Tsung's main developer.\n>\n> > I was\n> > hoping to use it to simulate my current load while tuning and making\n> > improvements. So far tsung doesn't appear well suited to my needs. I\n> use\n> > persistent connections; each tsung session uses a new connection. I have\n> > multiple applications that have very usage patterns (some web and largely\n> > idle, some non web and almost saturated); tsung has virtual users\n> choosing\n> > a session based on a probability with think times. I know many\n> programming\n> > languages; tsung (and its error messages) is in erlang.\n>\n> Tsung can be setup as an http or postgresql proxy: in this mode it'll\n> prepare session files for you while you use your application as\n> usual. The thinktime it sees will then get randomized at run time to\n> better reflect real usage.\n>\n> You can define several user arrival phases to see what happens when the\n> load raises then get back to normal traffic. Lots of options, really.\n>\n> Tsung generates statistics and comes with tools to analyze them and\n> provide graphs organized into a web page, one of those tools allow to\n> draw graphs from different simulations onto the same chart, with the\n> same scaling, in order to easily compare results.\n>\n> It seems to me tsung is a good tool for your use case.\n>\n> Regards,\n> --\n> dim\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI considered Tsung myself but haven't tried it. If you intend to, I suggest you read this excellent tutorial on using Tsung for test-loading Postgresql.\nWhile impressed I decided the procedure was too daunting and went with\nJMeter :-) It too can run test from multiple clients and has built in\ntables and graphs and you can save results as CSV or XML etc. 
In\nparticular I recommend adding the extenion \"listener\" (JMeter term for\nanything that captures and portrays test results) called Statitical\nAggregate Report.\nMay the force be with you,-- ShaulOn Wed, Jun 3, 2009 at 12:29 PM, Dimitri Fontaine <[email protected]> wrote:\n\"Kenneth Cox\" <[email protected]> writes:\n\n> On Tue, 02 Jun 2009 05:26:41 -0400, Dimitri Fontaine\n> <[email protected]> wrote:\n>> I'd recommand having a look at tsung which will be able to replay a\n>> typical application scenario with as many concurrent users as you want\n>> to: http://archives.postgresql.org/pgsql-admin/2008-12/msg00032.php\n>>   http://tsung.erlang-projects.org/\n>>   http://pgfouine.projects.postgresql.org/tsung.html\n>\n> I am having a look at tsung and not getting very far yet.  Have you had luck\n> with it and do you really mean as many concurrent users as you want?\n\nLast time I used it it was in the context of a web application and to\ncompare PostgreSQL against Informix after a migration. So I used the\nHTTP protocol support of the injector.\n\nTsung is based on erlang and can be run from more than one node at any\ntime, last time I checked you could run 600 to 800 concurrent clients\nfrom each node. Recent versions of erlang allow a much greater number\nper node, one or two orders of magnitude greater, as I've been told by\nTsung's main developer.\n\n>   I was\n> hoping to use it to simulate my current load while tuning and making\n> improvements.  So far tsung doesn't appear well suited to my needs.  I use\n> persistent connections; each tsung session uses a new connection.  I have\n> multiple applications that have very usage patterns (some web and largely\n> idle, some non web and almost saturated); tsung has virtual users choosing\n> a session based on a probability with think times.  I know many  programming\n> languages; tsung (and its error messages) is in erlang.\n\nTsung can be setup as an http or postgresql proxy: in this mode it'll\nprepare session files for you while you use your application as\nusual. The thinktime it sees will then get randomized at run time to\nbetter reflect real usage.\n\nYou can define several user arrival phases to see what happens when the\nload raises then get back to normal traffic. Lots of options, really.\n\nTsung generates statistics and comes with tools to analyze them and\nprovide graphs organized into a web page, one of those tools allow to\ndraw graphs from different simulations onto the same chart, with the\nsame scaling, in order to easily compare results.\n\nIt seems to me tsung is a good tool for your use case.\n\nRegards,\n--\ndim\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 3 Jun 2009 14:11:16 +0300", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "On Wed, 03 Jun 2009 05:29:02 -0400, Dimitri Fontaine \n<[email protected]> wrote:\n> Last time I used it it was in the context of a web application and to\n> compare PostgreSQL against Informix after a migration. So I used the\n> HTTP protocol support of the injector.\n\nTsung seems well suited for that.\n\n> Tsung is based on erlang...you could run 600 to 800 concurrent clients\n> from each node.\n\nBut each tsung session (virtual user) uses a separate PG connection, and I \nneed 30k virtual users. I can't imagine 30k PG connections. 
I could \nimagine using pgbouncer in statement pooling mode, but that doesn't \ncharacterize my load well, where different PG connections have different \nprofiles. I have about 500 connections:\n\n ~450 from web servers, often idle, various work loads, no prepared \nstatements\n 50 from another client, mostly idle, small set of prepared statements\n 10 from another client, extremely active, small set of prepared \nstatements\n\nI know a tsung session doesn't have to exactly mimic a user and I tried to \ncoerce a tsung session to represent instead a DB client, with loops and \nmultiple CSV files. I wasn't so successful there, and was nagged by the \nassignment of sessions by probability, when I wanted a fixed number \nrunning each session.\n\nI do appreciate the suggestions, and I agree Tsung has lots of nifty \nfeatures. I used pgfouine to generate tsung sessions I love the graph \ngeneration but for me it comes down to simulating my DB load so that I can \nprofile and tune the DB. I am not seeing how to get tsung to fit my case.\n\nNext up I will try JMeter (thanks Shaul Dar for the suggestions).\n\nRegards,\nKen\n", "msg_date": "Wed, 03 Jun 2009 09:09:15 -0400", "msg_from": "\"Kenneth Cox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "On Tue, 2 Jun 2009, Shaul Dar wrote:\n\n> If you want to test the H/W and configuration of your DBMS then you can \n> use the pgbench tool (which uses a specific built-in DB+schema, \n> following the TPC benchmark).\n\nThere are a lot of TPC benchmarks. pgbench simulates TPC-B (badly), which \nis a benchmark from 1990. It's not at all representative of the current \nTPC benchmarks.\n\n> If you want to load test your own specific DB then I am unaware of any \n> such tools.\n\npgbench will run against any schema and queries, the built-in set are just \nthe easiest to use. I just released a bunch of slides and a package I \nnamed pgbench-tools that show some of the possibilities here, links to \neverything are at: \nhttp://notemagnet.blogspot.com/2009/05/bottom-up-postgresql-benchmarking-and.html\n\nI'd mentioned working on that this before on this list but the code just \ngot stable enough to release recently. Anybody who is running lots of \npgbench tests at different database sizes and client loads might benefit \nfrom using my toolset to automate running the tests and reporting on the \nresults.\n\nThe last few slides of my pgbench presentation show how you might write a \ncustom test that measures how fast rows of various sizes can be inserted \ninto your database at various client counts.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 3 Jun 2009 19:01:32 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "On Thu, Jun 4, 2009 at 2:01 AM, Greg Smith <[email protected]> wrote:\n\n>\n> If you want to load test your own specific DB then I am unaware of any\n>> such tools.\n>>\n>\n> pgbench will run against any schema and queries, the built-in set are just\n> the easiest to use. I just released a bunch of slides and a package I named\n> pgbench-tools that show some of the possibilities here, links to everything\n> are at:\n> http://notemagnet.blogspot.com/2009/05/bottom-up-postgresql-benchmarking-and.html\n>\n\nׂGreg,\n\nHave you actually run pgbench against your own schema? Can you point me to\nan example? 
I also had the same impression reading the documentation. But\nwhen I tried it with the proper flags to use my own DB and query file I got\nan error that it couldn't find one of the tables mentioned in the built-in\ntest! I concluded that I cannot use any schema, I could only supply my own\nDB but with the same set of tables pgbench expects. Maybe I missed something\nor made a mistake?\n\nThanks,\n\n-- Shaul", "msg_date": "Tue, 9 Jun 2009 15:50:10 +0300", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "Shaul Dar <[email protected]> writes:\n> Have you actually run pgbench against your own schema? Can you point me to\n> an example? I also had the same impression reading the documentation. But\n> when I tried it with the proper flags to use my own DB and query file I got\n> an error that it couldn't find one of the tables mentioned in the built-in\n> test! I concluded that I cannot use any schema,\n\nNo, you just need to read the documentation. There's a switch that\nprevents the default action of trying to vacuum the \"standard\" tables.\nI think -N, but too lazy to look ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Jun 2009 09:53:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server " }, { "msg_contents": "Technically you can then use pgbench on that set of statements, but I\nusually just use perl's \"Benchmark\" module.... (i'm sure ruby or java\nor whatever has a similar tool)\n\n(First, I log statements by loading the application or web server with\nstatement logging turned on.... so I'm not \"guessing\" what sql will be\ncalled. Usually doing this exposes a flotilla of inefficiencies in\nthe code ....)\n\n\nOn Tue, Jun 9, 2009 at 9:53 AM, Tom Lane<[email protected]> wrote:\n> Shaul Dar <[email protected]> writes:\n>> Have you actually run pgbench against your own schema? Can you point me to\n>> an example? I also had the same impression reading the documentation. But\n>> when I tried it with the proper flags to use my own DB and query file I got\n>> an error that it couldn't find one of the tables mentioned in the built-in\n>> test! I concluded that I cannot use any schema,\n>\n> No, you just need to read the documentation.  
There's a switch that\n> prevents the default action of trying to vacuum the \"standard\" tables.\n> I think -N, but too lazy to look ...\n>\n>                        regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 11 Jun 2009 17:29:57 -0400", "msg_from": "Erik Aronesty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to load test a postgresql server" }, { "msg_contents": "Suppose I have a large table with a small-cardinality CATEGORY column (say, categories 1..5). I need to sort by an arbitrary (i.e. user-specified) mapping of CATEGORY, something like this:\n\n 1 => 'z'\n 2 => 'a'\n 3 => 'b'\n 4 => 'w'\n 5 => 'h'\n\nSo when I get done, the sort order should be 2,3,5,4,1.\n\nI could create a temporary table with the category-to-key mapping, but is there any way to do this in a single SQL statement?\n\nThanks,\nCraig\n", "msg_date": "Thu, 09 Jul 2009 09:26:42 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Sorting by an arbitrary criterion" }, { "msg_contents": "On Thu, Jul 9, 2009 at 5:26 PM, Craig James<[email protected]> wrote:\n> Suppose I have a large table with a small-cardinality CATEGORY column (say,\n> categories 1..5).  I need to sort by an arbitrary (i.e. user-specified)\n> mapping of CATEGORY, something like this:\n>\n>  1 => 'z'\n>  2 => 'a'\n>  3 => 'b'\n>  4 => 'w'\n>  5 => 'h'\n>\n> So when I get done, the sort order should be 2,3,5,4,1.\n>\n> I could create a temporary table with the category-to-key mapping, but is\n> there any way to do this in a single SQL statement?\n>\n\nyou can create translation table, join it, and sort by its key.\n\n\n-- \nGJ\n", "msg_date": "Thu, 9 Jul 2009 17:35:01 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting by an arbitrary criterion" }, { "msg_contents": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]> writes:\n> On Thu, Jul 9, 2009 at 5:26 PM, Craig James<[email protected]> wrote:\n>> Suppose I have a large table with a small-cardinality CATEGORY column (say,\n>> categories 1..5).  I need to sort by an arbitrary (i.e. user-specified)\n>> mapping of CATEGORY, something like this:\n\n> you can create translation table, join it, and sort by its key.\n\nMuch easier to\n\tORDER BY CASE category WHEN 'z' THEN 1 WHEN 'a' THEN 2 ... END\n\nActually, consider putting the CASE into a function and doing\n\tORDER BY sort_order(category)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Jul 2009 12:38:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting by an arbitrary criterion " }, { "msg_contents": "Craig James <[email protected]> wrote: \n> Suppose I have a large table with a small-cardinality CATEGORY\n> column (say, categories 1..5). I need to sort by an arbitrary\n> (i.e. 
user-specified) mapping of CATEGORY\n \nThere was a recent thread discussing ways to do that:\n \nhttp://archives.postgresql.org/pgsql-admin/2009-07/msg00016.php\n \n-Kevin\n", "msg_date": "Thu, 09 Jul 2009 11:39:04 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting by an arbitrary criterion" }, { "msg_contents": "On Thu, Jul 9, 2009 at 6:26 PM, Craig James<[email protected]> wrote:\n> Suppose I have a large table with a small-cardinality CATEGORY column (say,\n> categories 1..5).  I need to sort by an arbitrary (i.e. user-specified)\n> mapping of CATEGORY, something like this:\n>\n>  1 => 'z'\n>  2 => 'a'\n>  3 => 'b'\n>  4 => 'w'\n>  5 => 'h'\n>\n> So when I get done, the sort order should be 2,3,5,4,1.\n\nIf the object is to avoid a separate table, you can do it with a\n\"case\" statement:\n\n select ... from ...\n order by case category\n when 1 then 'z'\n when 2 then 'a'\n when 3 then 'b'\n when 4 then 'w'\n when 5 then 'h'\n end\n\nIf you think this sounds slow, you're right. But it might perform well\nenough for your use case.\n\nA.\n", "msg_date": "Thu, 9 Jul 2009 18:39:15 +0200", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting by an arbitrary criterion" }, { "msg_contents": "On Thu, Jul 09, 2009 at 09:26:42AM -0700, Craig James wrote:\n> Suppose I have a large table with a small-cardinality CATEGORY column (say, categories 1..5). I need to sort by an arbitrary (i.e. user-specified) mapping of CATEGORY, something like this:\n>\n> 1 => 'z'\n> 2 => 'a'\n> 3 => 'b'\n> 4 => 'w'\n> 5 => 'h'\n> So when I get done, the sort order should be 2,3,5,4,1.\n> I could create a temporary table with the category-to-key mapping, but is there any way to do this in a single SQL statement?\n\nYou can do it like this:\n\nselect c.*\nfrom categories c join ( values (1, 'z'), (2, 'a'), (3, 'b'), (4, 'w'), (5, 'h') ) as o (id, ordering) on c.id = o.id\norder by o.ordering\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Thu, 9 Jul 2009 20:11:09 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting by an arbitrary criterion" }, { "msg_contents": "> On Thu, Jul 09, 2009 at 09:26:42AM -0700, Craig James wrote:\n> You can do it like this:\n> select c.*\n> from categories c join ( values (1, 'z'), (2, 'a'), (3, 'b'), (4, 'w'),\n> (5, 'h') ) as o (id, ordering) on c.id = o.id\n> order by o.ordering\n\nAnother option would be:\n\nselect c.*\nfrom categories c\norder by case(c.category) when 1 then 'z' when 2 then 'a' when 3 then\n'b' when 4 then 'w' when 5 then 'h' end;\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n\n", "msg_date": "Thu, 9 Jul 2009 14:13:52 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting by an arbitrary criterion" }, { "msg_contents": "2009/7/9 Tom Lane <[email protected]>:\n> =?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]> writes:\n>> On Thu, Jul 9, 2009 at 5:26 PM, Craig James<[email protected]> wrote:\n>>> Suppose I have a large table with a small-cardinality CATEGORY column (say,\n>>> categories 1..5).  I need to sort by an arbitrary (i.e. 
user-specified)\n>>> mapping of CATEGORY, something like this:\n>\n>> you can create translation table, join it, and sort by its key.\n>\n> Much easier to\n>        ORDER BY CASE category WHEN 'z' THEN 1 WHEN 'a' THEN 2 ... END\n>\n> Actually, consider putting the CASE into a function and doing\n>        ORDER BY sort_order(category)\n\nI suppose table is handy, when you have a lot of items as keys...\n\n\n\n-- \nGJ\n", "msg_date": "Thu, 9 Jul 2009 21:25:21 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting by an arbitrary criterion" } ]
[ { "msg_contents": "Hi all,\n\nI've been staring at this for hours (if not days). Any hints are appreciated! At first I thought \"there must be a way to make postgresql perform on this thing\", but i've lost hope that pgsql actually can deal with it..\n\nThe query is:\n\nSELECT DISTINCT\n posrel.pos,\n questions.number,\n questions.title,\n answers.number, \n answers.title,\n answers.body \nFROM \n bar_portals portals, \n ONLY bar_insrel insrel,\n bar_faq faq, \n ONLY bar_posrel posrel, \n bar_questions questions,\n bar_insrel insrel0,\n bar_answers answers \nWHERE \n portals.number=2534202 \n AND \n (portals.number=insrel.snumber AND faq.number=insrel.dnumber) \n AND \n (faq.number=posrel.snumber AND questions.number=posrel.dnumber AND posrel.rnumber=780) \n AND \n (\n (questions.number=insrel0.dnumber AND answers.number=insrel0.snumber AND insrel0.dir<>1) \n OR \n (questions.number=insrel0.snumber AND answers.number=insrel0.dnumber)\n ) \n AND \n (\n portals.owner NOT IN ('notinuse','productie') \n AND insrel.owner NOT IN ('notinuse','productie') \n AND faq.owner NOT IN ('notinuse','productie') \n AND posrel.owner NOT IN ('notinuse','productie') \n AND questions.owner NOT IN ('notinuse','productie') \n AND insrel0.owner NOT IN ('notinuse','productie') \n AND answers.owner NOT IN ('notinuse','productie')\n ) \nORDER BY posrel.pos ASC;\n\nA whole mouth full. Basically, we have portals, which are linked through insrel with faq's, which are linked through posrel to questions and finally these questions are linked to answers by insrel again.\n\nInsrel is a horrible table that links together every object in this system, not just faq's and such. Posrel is similar (actually contains a subset of insrel).\n\nNow, the explain analyze output:\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=784671.21..784674.86 rows=209 width=252) (actual time=46493.975..46494.052 rows=19 loops=1)\n -> Sort (cost=784671.21..784671.73 rows=209 width=252) (actual time=46493.969..46493.987 rows=19 loops=1)\n Sort Key: posrel.pos, questions.number, questions.title, answers.number, answers.title, answers.body\n -> Nested Loop (cost=1.74..784663.15 rows=209 width=252) (actual time=3467.099..46493.621 rows=19 loops=1)\n Join Filter: (((questions.number = insrel0.dnumber) AND (answers.number = insrel0.snumber) AND (insrel0.dir <> 1)) OR ((questions.number = insrel0.snumber) AND (answers.number = insrel0.dnumber)))\n -> Nested Loop (cost=0.00..6769.08 rows=181978 width=252) (actual time=1.962..6548.477 rows=1732040 loops=1)\n -> Nested Loop (cost=0.00..282.30 rows=2 width=33) (actual time=1.287..7.785 rows=19 loops=1)\n -> Nested Loop (cost=0.00..277.16 rows=18 width=8) (actual time=1.030..5.504 rows=19 loops=1)\n -> Nested Loop (cost=0.00..182.63 rows=1 width=8) (actual time=0.921..4.445 rows=1 loops=1)\n -> Nested Loop (cost=0.00..147.05 rows=64 width=4) (actual time=0.060..3.222 rows=269 loops=1)\n -> Seq Scan on foo_portals portals (cost=0.00..1.27 rows=1 width=4) (actual time=0.017..0.027 rows=1 loops=1)\n Filter: ((number = 2534202) AND (\"owner\" <> ALL ('{notinuse,productie}'::text[])))\n -> Index Scan using foo_insrel_snumber_idx on foo_insrel insrel (cost=0.00..145.14 rows=64 width=8) (actual time=0.038..2.673 rows=269 loops=1)\n Index Cond: (snumber = 2534202)\n Filter: (\"owner\" <> ALL 
('{notinuse,productie}'::text[]))\n -> Index Scan using foo_faq_main_idx on foo_faq faq (cost=0.00..0.54 rows=1 width=4) (actual time=0.003..0.003 rows=0 loops=269)\n Index Cond: (faq.number = insrel.dnumber)\n Filter: (\"owner\" <> ALL ('{notinuse,productie}'::text[]))\n -> Index Scan using foo_posrel_snumber_owner_idx on foo_posrel posrel (cost=0.00..93.97 rows=45 width=12) (actual time=0.105..0.984 rows=19 loops=1)\n Index Cond: (faq.number = posrel.snumber)\n Filter: (rnumber = 780)\n -> Index Scan using foo_questions_main_idx on foo_questions questions (cost=0.00..0.27 rows=1 width=29) (actual time=0.106..0.112 rows=1 loops=19)\n Index Cond: (questions.number = posrel.dnumber)\n Filter: (\"owner\" <> ALL ('{notinuse,productie}'::text[]))\n -> Seq Scan on foo_answers answers (cost=0.00..2333.50 rows=90989 width=219) (actual time=0.061..162.481 rows=91160 loops=19)\n Filter: (\"owner\" <> ALL ('{notinuse,productie}'::text[]))\n -> Bitmap Heap Scan on foo_insrel insrel0 (cost=1.74..4.25 rows=1 width=12) (actual time=0.020..0.020 rows=0 loops=1732040)\n Recheck Cond: (((questions.number = insrel0.dnumber) AND (answers.number = insrel0.snumber)) OR ((answers.number = insrel0.dnumber) AND (questions.number = insrel0.snumber)))\n Filter: (\"owner\" <> ALL ('{notinuse,productie}'::text[]))\n -> BitmapOr (cost=1.74..1.74 rows=1 width=0) (actual time=0.018..0.018 rows=0 loops=1732040)\n -> Bitmap Index Scan on foo_inresl_dnumber_snumber_idx (cost=0.00..0.87 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=1732040)\n Index Cond: ((questions.number = insrel0.dnumber) AND (answers.number = insrel0.snumber))\n -> Bitmap Index Scan on foo_inresl_dnumber_snumber_idx (cost=0.00..0.87 rows=1 width=0) (actual time=0.008..0.008 rows=0 loops=1732040)\n Index Cond: ((answers.number = insrel0.dnumber) AND (questions.number = insrel0.snumber))\n Total runtime: 46494.271 ms\n(35 rows)\n\nAs you can see, 46 seconds. Not quite impressive.\n\nNow, when I split up the OR in two distinct queries, everything is nice and fast. Both queries run in sub-second time.\n\nAnyway, back to the query plan. The plan nicely starts with portals, insrel, faq, posrel and questions. But then it proposes something totally stupid, it seems it does a cartesian product of those 19-odd questions it got so far and the 91000 answers (the seq scan on foo_answers), and then goes to check for each of the pairs in that big set whether the disjunction holds.\n\nWhy would it do this? And more importantly: how do i get it to do it in a smarter way; eg. by retrieving for each of the questions the corresponding answers by traversing the insrel disjunction using indices etc..\n\nMy guess (and i'd like to know whether i'm 'warm' or 'cold' here) is that insrel is such a messed up table, that postgres fails to extract representative statistics from a small sample. The most important correlation is probably in snumber/dnumber, but the characteristics of this correlation vary wildly: there might be one snumber with a million dnumbers and a million snumbers with only one dnumber.\n\nI've tried bigger statistics targets on the insrel dnumber/snumber columns, even maxed it out to 1000, but that did not help.\n\nAnyway, any hints on getting this beast perform (without rewriting the query, that's not possible in this case due to the query being generated by an ORM) are welcome. 
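To be concrete about the split I mentioned above: boiled down to just the part where the OR lives (the portals/faq/posrel joins, the owner filters and most of the select list stripped out), it amounts to gluing the two halves of the disjunction together with UNION, roughly:\n\nSELECT q.number AS question, a.number AS answer\nFROM bar_questions q\nJOIN bar_insrel insrel0 ON q.number = insrel0.dnumber AND insrel0.dir <> 1\nJOIN bar_answers a ON a.number = insrel0.snumber\nUNION\nSELECT q.number, a.number\nFROM bar_questions q\nJOIN bar_insrel insrel0 ON q.number = insrel0.snumber\nJOIN bar_answers a ON a.number = insrel0.dnumber;\n\nEach half on its own comes back quickly; it is only the combined OR form that goes wrong. But since the ORM insists on generating the OR version, that doesn't help me directly.\n\n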
I'm starting to think it is impossible, and that postgresql just doesn't work for this particular query+dataset.\n\n\nThanks,\n\nKoen\n", "msg_date": "Mon, 1 Jun 2009 18:02:09 +0200", "msg_from": "Koen Martens <[email protected]>", "msg_from_op": true, "msg_subject": "Very inefficient query plan with disjunction in WHERE clause" }, { "msg_contents": "2009/6/1 Koen Martens <[email protected]>\n\n>\n> Now, when I split up the OR in two distinct queries, everything is nice and\n> fast. Both queries run in sub-second time.\n\n\nHi.\n\nPostgreSQL simply does not like ORs (can't use indexes in this case), so\nUNION/UNION ALL is your friend.\n\nBest regards, Vitalii Tymchyshyn", "msg_date": "Mon, 1 Jun 2009 19:19:12 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very inefficient query plan with disjunction in WHERE\n\tclause" }, { "msg_contents": "On Mon, 1 Jun 2009, Koen Martens wrote:\n> Anyway, any hints on getting this beast perform (without rewriting the \n> query, that's not possible in this case due to the query being generated \n> by an ORM) are welcome. I'm starting to think it is impossible, and that \n> postgresql just doesn't work for this particular query+dataset.\n\nYeah, being bound by the ORM can be annoying. What version of Postgres is \nthis? Recent versions can sometimes do a bitmap index scan to satisfy an \nOR constraint.\n\nMatthew\n\n-- \n Anyone who goes to a psychiatrist ought to have his head examined.\n\n", "msg_date": "Mon, 1 Jun 2009 17:37:32 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very inefficient query plan with disjunction in WHERE\n clause" }, { "msg_contents": "On Mon, 1 Jun 2009, Koen Martens wrote:\n> Anyway, any hints on getting this beast perform (without rewriting the query, that's not possible in this case due to\n> the query being generated by an ORM) are welcome. I'm starting to think it is impossible, and that postgresql just\n> doesn't work for this particular query+dataset.\n\nYeah, being bound by the ORM can be annoying. What version of Postgres is \nthis? Recent versions can sometimes do a bitmap index scan to satisfy an \nOR constraint.\n\nMatthew\n\n-- \n I work for an investment bank. I have dealt with code written by stock\n exchanges. I have seen how the computer systems that store your money are\n run. If I ever make a fortune, I will store it in gold bullion under my\n bed. -- Matthew Crosby\n", "msg_date": "Tue, 2 Jun 2009 13:47:39 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very inefficient query plan with disjunction in WHERE\n clause" } ]
[ { "msg_contents": "Hi all --\n\nIn attempting to perform a particular query in postgres, I ran into\nwhat appeared to be a pretty severe performance bottleneck.  I didn't\nknow whether I'd constructed my query stupidly, or I'd misconfigured\npostgres in some suboptimal way, or what else might be going on.\nThough I very much wanted to know the underlying cause of this\nperformance issue on my query, I was also facing the constraint that I\nwas working with a system in production and the tables my query\noperated on could not be taken offline.  So I did the sensible thing\nand captured a snapshot of the current production system via simple\npg_dump and played it back into a different postgres installation.\nThe snapshot copy was installed onto a \"development system\" with a\nPentium D class processor running postgres 8.1 on CentOS 4 32-bit\nwhereas the production copy was on a \"production system\" with a Core 2\nDuo class processor running postgres 8.3 on OpenSUSE 10.2 64-bit.\n\nI resubmitted a modified version of the query that'd given me problems\nbefore, this time giving the same query to both systems at the same\ntime.  To my surprise, the query finished successfully on the\ndevelopment system in under a minute.  I let the query continue to run\non the production system while I verified the results from the query\non the development system -- everything checked out.  It took more\nthan a day and a half for the query to complete on the production\nsystem.  Granted, the production system was continuing to see inserts\nto the relevant table while my query was running and granted that the\nsize of the production system's table had grown since the snapshot\n(but not more than 50% larger), but I found it very difficult to\naccept these conditions explaining such a massive difference in\nperformance of my query.  Here's the query and the response to\nconsole, including \\timing info, as submitted on the development\nsystem:\nsmirkfp=# insert into dupids ( id ) select id from content where id\nnot in (select min(id) from content group by hash);\nINSERT 0 437391\nTime: 25733.394 ms\n\nHere is the same query with the very different timing results seen on\nthe production system:\nsmirkfp=# insert into dupids ( id ) select id from content where id\nnot in (select min(id) from content group by hash);\nINSERT 0 441592\nTime: 142622702.430 ms\n\nA little more background:  The table of interest, content, has around\n1.5M rows on the production system and around 1.1M rows on the\ndevelopment system at the time this query was run.  On both systems,\nthe smirkfp databases are centered around this table, content, and\nhave no other large tables or activities of interest outside this\ntable.  The database is sufficiently new that no time had been taken\nto vacuum or analyze anything previously.  Neither the development nor\nproduction system had noteworthy processor load or disk activity\noutside postgres.  
On the production system, the above long-running\nquery was pegging one of the processor cores at ~100% for almost the\nwhole time of the query's execution.\n\nOn the development system, I asked postgres to show me the execution\nplan for my query:\nsmirkfp=# explain insert into dupids ( id ) select id from content\nwhere id not in (select min(id) from content group by hash);\n                                  QUERY PLAN\n---------------------------------------------------------------------------------\n Seq Scan on content  (cost=204439.55..406047.33 rows=565752 width=4)\n  Filter: (NOT (hashed subplan))\n  SubPlan\n    ->  HashAggregate  (cost=204436.55..204439.05 rows=200 width=36)\n          ->  Seq Scan on content  (cost=0.00..198779.03 rows=1131503 width=36)\n(5 rows)\n\n\nFor comparison, I asked the same thing of the production system:\nsmirkfp=# explain insert into dupids ( id ) select id from content\nwhere id not in (select min(id) from content group by hash);\n                                        QUERY PLAN\n---------------------------------------------------------------------------------------------\n Seq Scan on content  (cost=493401.85..9980416861.63 rows=760071 width=4)\n  Filter: (NOT (subplan))\n  SubPlan\n    ->  Materialize  (cost=493401.85..504915.85 rows=646400 width=37)\n          ->  GroupAggregate  (cost=468224.39..487705.45 rows=646400 width=37)\n                ->  Sort  (cost=468224.39..472024.74 rows=1520142 width=37)\n                      Sort Key: public.content.hash\n                      ->  Seq Scan on content  (cost=0.00..187429.42\nrows=1520142 width=37)\n(8 rows)\n\n\nLooks pretty different.  Next, I thought I'd try asking the\ndevelopment system to analyze the table, content, and see if that\nchanged anything:\nsmirkfp=# analyze content;\nANALYZE\nsmirkfp=# explain insert into dupids ( id ) select id from content\nwhere id not in (select min(id) from content group by hash);\n                                        QUERY PLAN\n---------------------------------------------------------------------------------------------\n Seq Scan on content  (cost=480291.35..7955136280.55 rows=582888 width=4)\n  Filter: (NOT (subplan))\n  SubPlan\n    ->  Materialize  (cost=480291.35..492297.85 rows=656050 width=40)\n          ->  GroupAggregate  (cost=457245.36..474189.30 rows=656050 width=40)\n                ->  Sort  (cost=457245.36..460159.80 rows=1165776 width=40)\n                      Sort Key: hash\n                      ->  Seq Scan on content  (cost=0.00..199121.76\nrows=1165776 width=40)\n(8 rows)\n\n\nThat looks a bit discouraging.  As a consequence of the analyze, it\nseems the development system has changed its execution plan for that\nquery to be just like the production system.  Let's drop the table,\ndupids, and re-create it, then re-run our query on the development\nsystem with its new execution plan:\nsmirkfp=# drop table dupids;\nDROP TABLE\nTime: 227.843 ms\nsmirkfp=# create table dupids ( id integer );\nCREATE TABLE\nTime: 30.980 ms\nsmirkfp=# insert into dupids ( id ) select id from content where id\nnot in (select min(id) from content group by hash);\nCancel request sent\nERROR:  canceling statement due to user request\nsmirkfp=#\n\nI grew impatient after letting the query run for an hour.  Without\nknowing precisely how long the query would've taken with this\nexecution plan, it seems clear enough that it is a suboptimal plan.\n\nHow to approach manipulating the execution plan back to something more\nefficient?  
What characteristics of the table could have induced\nanalyze to suggest the much slower query plan?\n\n\nThanks in advance,\n\nDavin\n", "msg_date": "Wed, 3 Jun 2009 10:42:40 -0500", "msg_from": "Davin Potts <[email protected]>", "msg_from_op": true, "msg_subject": "poor performing plan from analyze vs. fast default plan pre-analyze\n\ton new database" }, { "msg_contents": "Postgresql isn't very efficient with subselects like that,\ntry:\nexplain select c.id from content c LEFT JOIN (select min(id) AS id\nfrom content group by hash) cg ON cg.id=c.id WHERE cg.id is null;\n", "msg_date": "Wed, 3 Jun 2009 17:07:37 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor performing plan from analyze vs. fast default plan\n\tpre-analyze on new database" }, { "msg_contents": "Davin Potts <[email protected]> writes:\n> How to approach manipulating the execution plan back to something more\n> efficient? �What characteristics of the table could have induced\n> analyze to suggest the much slower query plan?\n\nWhat's evidently happening is that the planner is backing off from using\na hashed subplan because it thinks the hashtable will require more than\nwork_mem. Is 646400 a reasonably good estimate of the number of rows\nthat the sub-select will produce? If it's a large overestimate, then\nperhaps increasing the stats target for content.hash will help. If\nit's good, then what you want to do is increase work_mem to allow the\nplanner to use the better plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Jun 2009 12:27:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor performing plan from analyze vs. fast default plan\n\tpre-analyze on new database" }, { "msg_contents": "\nOn 6/3/09 8:42 AM, \"Davin Potts\" <[email protected]> wrote:\n\n> Hi all --\n> \n> \n> A little more background:  The table of interest, content, has around\n> 1.5M rows on the production system and around 1.1M rows on the\n> development system at the time this query was run.  On both systems,\n> the smirkfp databases are centered around this table, content, and\n> have no other large tables or activities of interest outside this\n> table.  The database is sufficiently new that no time had been taken\n> to vacuum or analyze anything previously.  Neither the development nor\n> production system had noteworthy processor load or disk activity\n> outside postgres.  
On the production system, the above long-running\n> query was pegging one of the processor cores at ~100% for almost the\n> whole time of the query's execution.\n> \n> On the development system, I asked postgres to show me the execution\n> plan for my query:\n> smirkfp=# explain insert into dupids ( id ) select id from content\n> where id not in (select min(id) from content group by hash);\n>                                   QUERY PLAN\n> ------------------------------------------------------------------------------\n> ---\n>  Seq Scan on content  (cost=204439.55..406047.33 rows=565752 width=4)\n>   Filter: (NOT (hashed subplan))\n>   SubPlan\n>     ->  HashAggregate  (cost=204436.55..204439.05 rows=200 width=36)\n>           ->  Seq Scan on content  (cost=0.00..198779.03 rows=1131503\n> width=36)\n> (5 rows)\n> \n> \n> For comparison, I asked the same thing of the production system:\n> smirkfp=# explain insert into dupids ( id ) select id from content\n> where id not in (select min(id) from content group by hash);\n>                                         QUERY PLAN\n> ------------------------------------------------------------------------------\n> ---------------\n>  Seq Scan on content  (cost=493401.85..9980416861.63 rows=760071 width=4)\n>   Filter: (NOT (subplan))\n>   SubPlan\n>     ->  Materialize  (cost=493401.85..504915.85 rows=646400 width=37)\n>           ->  GroupAggregate  (cost=468224.39..487705.45 rows=646400 width=37)\n>                 ->  Sort  (cost=468224.39..472024.74 rows=1520142 width=37)\n>                       Sort Key: public.content.hash\n>                       ->  Seq Scan on content  (cost=0.00..187429.42\n> rows=1520142 width=37)\n> (8 rows)\n> \n> \n> Looks pretty different.  Next, I thought I'd try asking the\n> development system to analyze the table, content, and see if that\n> changed anything:\n> smirkfp=# analyze content;\n> ANALYZE\n> smirkfp=# explain insert into dupids ( id ) select id from content\n> where id not in (select min(id) from content group by hash);\n>                                         QUERY PLAN\n> ------------------------------------------------------------------------------\n> ---------------\n>  Seq Scan on content  (cost=480291.35..7955136280.55 rows=582888 width=4)\n>   Filter: (NOT (subplan))\n>   SubPlan\n>     ->  Materialize  (cost=480291.35..492297.85 rows=656050 width=40)\n>           ->  GroupAggregate  (cost=457245.36..474189.30 rows=656050 width=40)\n>                 ->  Sort  (cost=457245.36..460159.80 rows=1165776 width=40)\n>                       Sort Key: hash\n>                       ->  Seq Scan on content  (cost=0.00..199121.76\n> rows=1165776 width=40)\n> (8 rows)\n> \n> How to approach manipulating the execution plan back to something more\n> efficient?  What characteristics of the table could have induced\n> analyze to suggest the much slower query plan?\n> \n\nWhen the table was analyzed, it found many more rows for the hash than the\ndefault assumption (distinct value estimate). If Postgres thinks the\nhash-aggregate plan won't fit in work_mem, it will go to a sort -> group\naggregate plan even if it estimates the sort plan to take thousands of times\nmore effort.\n\nSolutions:\n\n1. If this estimate is wrong, try increasing the statistics target on the\ncolumn(s) in question and re-analyzing. Or for a test change the global\ndefault_statistics_target and experiment. In the explain queries below, it\nappears as though this approach may have only a moderate affect. 
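(For reference, the per-column knob is something like\n\n  ALTER TABLE content ALTER COLUMN hash SET STATISTICS 1000;\n  ANALYZE content;\n\nwith 1000 being as high as that setting goes on 8.3, if I recall correctly -- untested here, so adjust the names to match your schema.) 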
Is the\nestimate of 650,000 distinct values for the hash column accurate?\n\n2. Increase work_mem so that it can fit the hash in memory and use that\nplan and not have to sort the whole table. The below explain on your\nproduction db thinks it needs to hash into about ~650000 distinct buckets for\nrows of width 37. That should fit in 32MB RAM or so.\nTry work_mem of 16MB, 32MB, and 64MB (and perhaps even 128MB or larger on the\ntest box) and see if the explain changes.\n\n set work_mem ='32MB';\nExplain <your query>;\n\nTo see what your current work_mem is do\n show work_mem;\n\nIf this approach works, you can either set this work_mem before running the\nquery, or globally change it. It is set low by default because if all your\nconnections are busy doing work that requires work_mem, you can end up using\nRAM at up to about (2 * work_mem * active connections).\n\nOn my larger db, the back-end aggregate processing connections use work_mem\n= 750MB, which allows queries with 150M + rows to aggregate in minutes rather\nthan sort for half a day. However, we carefully cap the number of processes\nthat can concurrently run such queries.\n\n> \n> Thanks in advance,\n> \n> Davin\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 3 Jun 2009 10:08:23 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor performing plan from analyze vs. fast default\n\tplan pre-analyze on new database" } ]
[ { "msg_contents": "I've been Googling for SQL tuning help for Postgres but the pickings \nhave been rather slim. Maybe I'm using the wrong search terms. I'm \ntrying to improve the performance of the following query and would be \ngrateful for any hints, either directly on the problem at hand, or to \nresources I can read to find out more about how to do this. In the \npast I have fixed most problems by adding indexes to get rid of \nsequential scans, but in this case it appears to be the hash join and \nthe nested loops that are taking up all the time and I don't really \nknow what to do about that. In Google I found mostly references from \npeople wanting to use a hash join to *fix* a performance problem, not \ndeal with it creating one...\n\nMy Postgres version is 8.3.3, on Linux.\n\nThanks in advance,\n\njanine\n\niso=# explain analyze select a.item_id,\niso-#\niso-# \ncontent_item__get_best_revision(a.item_id) as revision_id,\niso-# \ncontent_item__get_latest_revision(a.item_id) as last_revision_id,\niso-# \ncontent_revision__get_number(a.article_id) as revision_no,\niso-# (select count(*) from cr_revisions \nwhere item_id=a.item_id) as revision_count,\niso-#\niso-# -- Language support\niso-# b.lang_id,\niso-# b.lang_key,\niso-# (case when b.lang_key = 'big5' then \n'#D7D7D7' else '#ffffff' end) as tr_bgcolor,\niso-# coalesce(dg21_item_langs__rel_lang \n(b.lang_id,'gb2312'),'0') as gb_item_id,\niso-# coalesce(dg21_item_langs__rel_lang \n(b.lang_id,'iso-8859-1'),'0') as eng_item_id,\niso-#\niso-# -- user defined data\niso-# a.article_id,\niso-# a.region_id,\niso-# a.author,\niso-# a.archive_status,\niso-# a.article_status,\niso-# case when a.archive_status='t'\niso-# then '<font color=#808080>never \nexpire</font>'\niso-# else to_char(a.archive_date, \n'YYYY年MM月DD日')\niso-# end as archive_date,\niso-#\niso-# -- Standard data\niso-# a.article_title,\niso-# a.article_desc,\niso-# a.creation_user,\niso-# a.creation_ip,\niso-# a.modifying_user,\niso-#\niso-# -- Pretty format data\niso-# a.item_creator,\niso-#\niso-# -- Other data\niso-# a.live_revision,\niso-# to_char(a.publish_date, 'YYYY年MM月 \nDD日') as publish_date,\niso-# to_char(a.creation_date, 'DD/MM/YYYY \nHH:MI AM') as creation_date,\niso-#\niso-# case when article_status='approved'\niso-# then 'admin content, auto \napproved'\niso-# when article_status='unapproved'\niso-# then (select approval_text\niso(# from dg21_approval\niso(# where \nrevision_id=a.article_id\niso(# and \napproval_status='f' order by approval_date desc limit 1)\niso-# else ''\niso-# end as approval_text\niso-#\niso-# from dg21_article_items a, \ndg21_item_langs b\niso-# where a.item_id = b.item_id\niso-#\niso-# order by b.lang_id desc, a.item_id\niso-# limit 21 offset 0;\n\n QUERY \n PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3516.97..3516.98 rows=1 width=1245) (actual \ntime=195948.132..195948.250 rows=21 loops=1)\n -> Sort (cost=3516.97..3516.98 rows=1 width=1245) (actual \ntime=195948.122..195948.165 rows=21 loops=1)\n Sort Key: b.lang_id, ci.item_id\n Sort Method: top-N heapsort Memory: 24kB\n -> Nested Loop (cost=719.67..3516.96 rows=1 width=1245) \n(actual time=346.687..195852.741 rows=4159 loops=1)\n -> Nested Loop (cost=719.67..3199.40 rows=1 \nwidth=413) (actual time=311.422..119467.334 rows=4159 loops=1)\n -> Nested Loop (cost=719.67..3198.86 rows=1 \nwidth=400) (actual 
time=292.951..1811.051 rows=4159 loops=1)\n -> Hash Join (cost=719.67..3197.98 \nrows=1 width=352) (actual time=292.832..777.290 rows=4159 loops=1)\n Hash Cond: (cr.item_id = ci.item_id)\n Join Filter: ((ci.live_revision = \ncr.revision_id) OR ((ci.live_revision IS NULL) AND (cr.revision_id = \ncontent_item__get_latest_revision(ci.item_id))))\n -> Hash Join (cost=154.38..1265.24 \nrows=4950 width=348) (actual time=74.789..375.580 rows=4950 loops=1)\n Hash Cond: (cr.revision_id = \nox.article_id)\n -> Seq Scan on cr_revisions \ncr (cost=0.00..913.73 rows=16873 width=321) (actual \ntime=0.058..71.539 rows=16873 loops=1)\n -> Hash (cost=92.50..92.50 \nrows=4950 width=27) (actual time=74.607..74.607 rows=4950 loops=1)\n -> Seq Scan on \ndg21_articles ox (cost=0.00..92.50 rows=4950 width=27) (actual \ntime=0.071..18.604 rows=4950 loops=1)\n -> Hash (cost=384.02..384.02 \nrows=14502 width=8) (actual time=217.789..217.789 rows=14502 loops=1)\n -> Seq Scan on cr_items ci \n(cost=0.00..384.02 rows=14502 width=8) (actual time=0.051..137.988 \nrows=14502 loops=1)\n -> Index Scan using acs_objects_pk on \nacs_objects ao (cost=0.00..0.88 rows=1 width=56) (actual \ntime=0.223..0.229 rows=1 loops=4159)\n Index Cond: (ao.object_id = \ncr.revision_id)\n -> Index Scan using persons_pk on persons ps \n(cost=0.00..0.27 rows=1 width=17) (actual time=0.017..0.023 rows=1 \nloops=4159)\n Index Cond: (ps.person_id = \nao.creation_user)\n -> Index Scan using dg21_item_langs_id_key on \ndg21_item_langs b (cost=0.00..8.27 rows=1 width=15) (actual \ntime=0.526..0.537 rows=1 loops=4159)\n Index Cond: (b.item_id = ci.item_id)\n SubPlan\n -> Limit (cost=297.21..297.22 rows=1 width=29) \n(never executed)\n -> Sort (cost=297.21..297.22 rows=1 \nwidth=29) (never executed)\n Sort Key: dg21_approval.approval_date\n -> Seq Scan on dg21_approval \n(cost=0.00..297.20 rows=1 width=29) (never executed)\n Filter: ((revision_id = $2) AND \n((approval_status)::text = 'f'::text))\n -> Aggregate (cost=10.77..10.78 rows=1 width=0) \n(actual time=0.051..0.053 rows=1 loops=4159)\n -> Index Scan using cr_revisions_item_id_idx \non cr_revisions (cost=0.00..10.77 rows=2 width=0) (actual \ntime=0.019..0.024 rows=1 loops=4159)\n Index Cond: (item_id = $0)\n Total runtime: 195949.928 ms\n(33 rows)\n\n---\nJanine Sisk\nPresident/CEO of furfly, LLC\n503-693-6407\n\n\n\n\n", "msg_date": "Wed, 3 Jun 2009 13:54:07 -0700", "msg_from": "Janine Sisk <[email protected]>", "msg_from_op": true, "msg_subject": "Pointers needed on optimizing slow SQL statements" }, { "msg_contents": "Janine Sisk <[email protected]> writes:\n> I've been Googling for SQL tuning help for Postgres but the pickings \n> have been rather slim. Maybe I'm using the wrong search terms. I'm \n> trying to improve the performance of the following query and would be \n> grateful for any hints, either directly on the problem at hand, or to \n> resources I can read to find out more about how to do this. In the \n> past I have fixed most problems by adding indexes to get rid of \n> sequential scans, but in this case it appears to be the hash join and \n> the nested loops that are taking up all the time and I don't really \n> know what to do about that. In Google I found mostly references from \n> people wanting to use a hash join to *fix* a performance problem, not \n> deal with it creating one...\n\nThe hashjoin isn't creating any problem that I can see. What's\nhurting you is the nestloops above it, which need to be replaced with\nsome other join technique. 
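(One quick way to see what a different join method would buy you is a one-off test with nestloops disabled -- an experiment only, not something to leave set:\n\n\tSET enable_nestloop = off;\n\tEXPLAIN ANALYZE <your query>;\n\tRESET enable_nestloop;\n\nand compare the timings.) 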
The planner is going for a nestloop because\nit expects only one row out of the hashjoin, which is off by more than\nthree orders of magnitude :-(. So in short, your problem is poor\nestimation of the selectivity of this condition:\n\n> Join Filter: ((ci.live_revision = \n> cr.revision_id) OR ((ci.live_revision IS NULL) AND (cr.revision_id = \n> content_item__get_latest_revision(ci.item_id))))\n\nIt's hard to tell why the estimate is so bad, though, since you didn't\nprovide any additional information. Perhaps increasing the statistics\ntarget for these columns (or the whole database) would help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Jun 2009 17:42:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements " }, { "msg_contents": "Ok, I will look into gathering better statistics. This is the first \ntime I've had a significant problem with a PG database, so this is \nuncharted territory for me.\n\nIf there is more info I could give that would help, please be more \nspecific about what you need and I will attempt to do so.\n\nThanks!\n\njanine\n\nOn Jun 3, 2009, at 2:42 PM, Tom Lane wrote:\n\n> Janine Sisk <[email protected]> writes:\n>> I've been Googling for SQL tuning help for Postgres but the pickings\n>> have been rather slim. Maybe I'm using the wrong search terms. I'm\n>> trying to improve the performance of the following query and would be\n>> grateful for any hints, either directly on the problem at hand, or to\n>> resources I can read to find out more about how to do this. In the\n>> past I have fixed most problems by adding indexes to get rid of\n>> sequential scans, but in this case it appears to be the hash join and\n>> the nested loops that are taking up all the time and I don't really\n>> know what to do about that. In Google I found mostly references from\n>> people wanting to use a hash join to *fix* a performance problem, not\n>> deal with it creating one...\n>\n> The hashjoin isn't creating any problem that I can see. What's\n> hurting you is the nestloops above it, which need to be replaced with\n> some other join technique. The planner is going for a nestloop \n> because\n> it expects only one row out of the hashjoin, which is off by more than\n> three orders of magnitude :-(. So in short, your problem is poor\n> estimation of the selectivity of this condition:\n>\n>> Join Filter: ((ci.live_revision =\n>> cr.revision_id) OR ((ci.live_revision IS NULL) AND (cr.revision_id =\n>> content_item__get_latest_revision(ci.item_id))))\n>\n> It's hard to tell why the estimate is so bad, though, since you didn't\n> provide any additional information. Perhaps increasing the statistics\n> target for these columns (or the whole database) would help.\n>\n> \t\t\tregards, tom lane\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n---\nJanine Sisk\nPresident/CEO of furfly, LLC\n503-693-6407\n\n\n\n\n", "msg_date": "Wed, 3 Jun 2009 15:04:47 -0700", "msg_from": "Janine Sisk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements " }, { "msg_contents": "On Wed, Jun 3, 2009 at 6:04 PM, Janine Sisk <[email protected]> wrote:\n> Ok, I will look into gathering better statistics.  
This is the first time\n> I've had a significant problem with a PG database, so this is uncharted\n> territory for me.\n>\n> If there is more info I could give that would help, please be more specific\n> about what you need and I will attempt to do so.\n>\n> Thanks!\n>\n> janine\n\nYou might find it helpful to try to inline the\ncontent_item__get_latest_revision function call. I'm not sure whether\nthat's a SQL function or what, but the planner isn't always real\nclever about things like that. If you can redesign things so that all\nthe logic is in the actual query, you may get better results.\n\nBut, we're not always real clever about selectivity. Sometimes you\nhave to fake the planner out, as discussed here.\n\nhttp://archives.postgresql.org/pgsql-performance/2009-06/msg00023.php\n\nActually, I had to do this today on a production application. In my\ncase, the planner thought that a big OR clause was not very selective,\nso it figured it wouldn't have to scan very far through the outer side\nbefore it found enough rows to satisfy the LIMIT clause. Therefore it\nmaterialized the inner side instead of hashing it, and when the\nselectivity estimate turned out to be wrong, it took 220 seconds to\nexecute. I added a fake join condition of the form a || b = a || b,\nwhere a and b were on different sides of the join, and now it hashes\nthe inner side and takes < 100 ms.\n\nFortunately, these kinds of problems are fairly rare, but they can be\nextremely frustrating to debug. With any kind of query debugging, the\nfirst question to ask yourself is \"Are any of my selectivity estimates\nway off?\". If the answer to that question is no, you should then ask\n\"Where is all the time going in this plan?\". If the answer to the\nfirst question is yes, though, your time is usually better spent\nfixing that problem, because once you do, the plan will most likely\nchange to something a lot better.\n\n...Robert\n", "msg_date": "Wed, 3 Jun 2009 21:21:24 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements" }, { "msg_contents": "I'm sorry if this is a stupid question, but... I changed \ndefault_statistics_target from the default of 10 to 100, restarted PG, \nand then ran \"vacuumdb -z\" on the database. The plan is exactly the \nsame as before. Was I supposed to do something else? Do I need to \nincrease it even further? This is an overloaded system to start with, \nso I'm being fairly conservative with what I change.\n\nthanks,\n\njanine\n\nOn Jun 3, 2009, at 2:42 PM, Tom Lane wrote:\n\n> Janine Sisk <[email protected]> writes:\n>> I've been Googling for SQL tuning help for Postgres but the pickings\n>> have been rather slim. Maybe I'm using the wrong search terms. I'm\n>> trying to improve the performance of the following query and would be\n>> grateful for any hints, either directly on the problem at hand, or to\n>> resources I can read to find out more about how to do this. In the\n>> past I have fixed most problems by adding indexes to get rid of\n>> sequential scans, but in this case it appears to be the hash join and\n>> the nested loops that are taking up all the time and I don't really\n>> know what to do about that. In Google I found mostly references from\n>> people wanting to use a hash join to *fix* a performance problem, not\n>> deal with it creating one...\n>\n> The hashjoin isn't creating any problem that I can see. 
What's\n> hurting you is the nestloops above it, which need to be replaced with\n> some other join technique. The planner is going for a nestloop \n> because\n> it expects only one row out of the hashjoin, which is off by more than\n> three orders of magnitude :-(. So in short, your problem is poor\n> estimation of the selectivity of this condition:\n>\n>> Join Filter: ((ci.live_revision =\n>> cr.revision_id) OR ((ci.live_revision IS NULL) AND (cr.revision_id =\n>> content_item__get_latest_revision(ci.item_id))))\n>\n> It's hard to tell why the estimate is so bad, though, since you didn't\n> provide any additional information. Perhaps increasing the statistics\n> target for these columns (or the whole database) would help.\n>\n> \t\t\tregards, tom lane\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n---\nJanine Sisk\nPresident/CEO of furfly, LLC\n503-693-6407\n\n\n\n\n", "msg_date": "Wed, 3 Jun 2009 19:32:18 -0700", "msg_from": "Janine Sisk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements " }, { "msg_contents": "On Wed, Jun 3, 2009 at 8:32 PM, Janine Sisk <[email protected]> wrote:\n> I'm sorry if this is a stupid question, but...  I changed\n> default_statistics_target from the default of 10 to 100, restarted PG, and\n> then ran \"vacuumdb -z\" on the database.  The plan is exactly the same as\n> before.  Was I supposed to do something else?  Do I need to increase it even\n> further?  This is an overloaded system to start with, so I'm being fairly\n> conservative with what I change.\n\nNo need to restart pg, just analyze is good enough (vacuumdb -z will do).\n\nAfter that, compare your explain analyze output and see if the\nestimates are any better. If they're better but not good enough, try\nincreasing stats target to something like 500 or 1000 (max is 1000)\nand reanalyze and see if that helps. If not, post the new explain\nanalyze and we'll take another whack at it.\n", "msg_date": "Wed, 3 Jun 2009 23:31:13 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements" }, { "msg_contents": "On 6/3/09 7:32 PM, Janine Sisk wrote:\n> I'm sorry if this is a stupid question, but... I changed\n> default_statistics_target from the default of 10 to 100, restarted PG,\n> and then ran \"vacuumdb -z\" on the database. The plan is exactly the same\n> as before. Was I supposed to do something else? Do I need to increase it\n> even further? This is an overloaded system to start with, so I'm being\n> fairly conservative with what I change.\n\nIt's possible that it won't change the plan; 100 is often not enough to \nchange the statistics.\n\nTry changing, in a superuser session, default_statistics_target to 400 \nand just ANALYZing the one table, and see if that changes the plan. If \nso, you'll want to increase the statistics setting on the filtered \ncolumns on that table.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Thu, 04 Jun 2009 12:43:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements" }, { "msg_contents": "\nOn Wed, 2009-06-03 at 21:21 -0400, Robert Haas wrote:\n\n> But, we're not always real clever about selectivity. 
Sometimes you\n> have to fake the planner out, as discussed here.\n> \n> http://archives.postgresql.org/pgsql-performance/2009-06/msg00023.php\n> \n> Actually, I had to do this today on a production application. In my\n> case, the planner thought that a big OR clause was not very selective,\n> so it figured it wouldn't have to scan very far through the outer side\n> before it found enough rows to satisfy the LIMIT clause. Therefore it\n> materialized the inner side instead of hashing it, and when the\n> selectivity estimate turned out to be wrong, it took 220 seconds to\n> execute. I added a fake join condition of the form a || b = a || b,\n> where a and b were on different sides of the join, and now it hashes\n> the inner side and takes < 100 ms.\n> \n> Fortunately, these kinds of problems are fairly rare, but they can be\n> extremely frustrating to debug. With any kind of query debugging, the\n> first question to ask yourself is \"Are any of my selectivity estimates\n> way off?\". If the answer to that question is no, you should then ask\n> \"Where is all the time going in this plan?\". If the answer to the\n> first question is yes, though, your time is usually better spent\n> fixing that problem, because once you do, the plan will most likely\n> change to something a lot better.\n\nThe Function Index solution works, but it would be much better if we\ncould get the planner to remember certain selectivities.\n\nI'm thinking a command like\n\n\tANALYZE foo [WHERE .... ]\n\nwhich would specifically analyze the selectivity of the given WHERE\nclause for use in queries.\n\n-- \n Simon Riggs           www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Sat, 06 Jun 2009 09:50:52 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements" }, { "msg_contents": "Hi,\n\nOn 6 June 2009, at 10:50, Simon Riggs wrote:\n> On Wed, 2009-06-03 at 21:21 -0400, Robert Haas wrote:\n>> But, we're not always real clever about selectivity. Sometimes you\n>> have to fake the planner out, as discussed here.\n[...]\n>\n>> Fortunately, these kinds of problems are fairly rare, but they can be\n>> extremely frustrating to debug. With any kind of query debugging, \n>> the\n>> first question to ask yourself is \"Are any of my selectivity \n>> estimates\n>> way off?\". If the answer to that question is no, you should then ask\n>> \"Where is all the time going in this plan?\". If the answer to the\n>> first question is yes, though, your time is usually better spent\n>> fixing that problem, because once you do, the plan will most likely\n>> change to something a lot better.\n>\n> The Function Index solution works, but it would be much better if we\n> could get the planner to remember certain selectivities.\n>\n> I'm thinking a command like\n>\n> \tANALYZE foo [WHERE .... ]\n>\n> which would specifically analyze the selectivity of the given WHERE\n> clause for use in queries.\n\nI don't know the stats subsystem well enough to judge by myself how \ngood this idea is, but I have some remarks about it:\n - it looks good :)\n - where to store the clauses to analyze?\n - do we want to tackle JOIN selectivity patterns too (more than one \ntable)?\n\nAn extension to the ANALYZE foo WHERE ... idea would be then to be \nable to analyze random SQL, which could in turn allow for maintaining \nVIEW stats. 
Is this already done, and if not, feasible and a good idea?\n\nThis way one could define a view and have the system analyze the \nclauses and selectivity of joins etc, then the hard part is for the \nplanner to be able to use those in user queries... mmm... maybe this \nisn't going to help much?\n\nRegards,\n-- \ndim", "msg_date": "Sun, 7 Jun 2009 20:28:10 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements" }, { "msg_contents": "On Sat, Jun 6, 2009 at 4:50 AM, Simon Riggs<[email protected]> wrote:\n> The Function Index solution works, but it would be much better if we\n> could get the planner to remember certain selectivities.\n\nI agree.\n\n> I'm thinking a command like\n>\n>        ANALYZE foo [WHERE .... ]\n>\n> which would specifically analyze the selectivity of the given WHERE\n> clause for use in queries.\n\nI think that's probably not the best syntax, because we don't want to\njust do it once; we want to make it a persistent property of the table\nso that every future ANALYZE run picks it up. Maybe something like:\n\nALTER TABLE <table> ADD ANALYZE <name> (<clause>)\nALTER TABLE <table> DROP ANALYZE <name>\n\n(I'm not in love with this so feel free to suggest improvements.)\n\nOne possible problem with this kind of thing is that it could be\ninconvenient if the number of clauses that you need to analyze is\nlarge. For example, suppose you have a table called \"object\" with a\ncolumn called \"type_id\". It's not unlikely that the histograms and\nMCVs for many of the columns in that table will be totally different\ndepending on the value of type_id. There might be enough different\nWHERE clauses that capturing their selectivity individually wouldn't\nbe practical, or at least not convenient.\n\nOne possible alternative would be to change the meaning of the\n<clause>, so that instead of just asking the planner to gather\nselectivity on that one clause, it asks the planner to gather a whole\nseparate set of statistics for the case where that clause holds. Then\nwhen we plan a query, we set the theorem-prover to work on the clauses\n(a la constraint exclusion) and see if any of them are implied by the\nquery. If so, we can use that set of statistics in lieu of the global\ntable statistics. There is the small matter of figuring out what to\ndo if we added multiple clauses and more than one is provable, but\n<insert hand-waving here>.\n\nIt would also be good to do this automatically whenever a partial\nindex is present.\n\n...Robert\n", "msg_date": "Sun, 7 Jun 2009 16:28:24 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements" }, { "msg_contents": "I'd prefer ALTER VIEW <name> SET ANALYZE=true; or CREATE/DROP ANALYZE <SQL>;\nAlso it should be possible to change statistics target for analyzed columns.\n\nSuch a statement would allow to analyze multi-table correlations. Note that\nfor view planner should be able to use correlation information even for\nqueries that do not use view, but may benefit from the information.
", "msg_date": "Tue, 9 Jun 2009 09:58:51 +0300", "msg_from": "Віталій Тимчишин <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements" }, { "msg_contents": "Віталій Тимчишин <[email protected]> writes:\n\n> I'd prefer ALTER VIEW <name> SET ANALYZE=true; or CREATE/DROP ANALYZE <SQL>;\n> Also it should be possible to change statistics target for analyzed\n> columns.\n\nYeah, my idea was ALTER VIEW <name> ENABLE ANALYZE; but that's an easy\npoint to solve if the idea proves helpful.\n\n> Such a statement would allow to analyze multi-table correlations. Note\n> that for view planner should be able to use correlation information\n> even for queries that do not use view, but may benefit from the\n> information.\n\nThat sounds like the hard part of it, but maybe our lovely geniuses will\ncome back and tell: \"oh, you can do it this way, easy enough\". :)\n\nRegards,\n-- \ndim\n", "msg_date": "Tue, 09 Jun 2009 10:31:25 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pointers needed on optimizing slow SQL statements" } ]
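A note on the "Function Index solution" Simon refers to above: it is the existing workaround of indexing an expression (usually wrapped in an immutable function) so that ANALYZE collects statistics for that expression, which the planner then uses whenever the identical expression appears in a WHERE clause. A minimal sketch, reusing Robert's hypothetical "object" table; the status column, the helper function and the constants are invented here purely for illustration:

    -- Wrap the hard-to-estimate predicate in an immutable helper function.
    CREATE FUNCTION object_class(type_id int, status text) RETURNS text AS $$
        SELECT CASE WHEN $1 = 3 AND $2 = 'active' THEN 'hot' ELSE 'cold' END
    $$ LANGUAGE sql IMMUTABLE;

    -- The expression index makes ANALYZE build MCV/histogram statistics
    -- for object_class(type_id, status) itself.
    CREATE INDEX object_class_idx ON object (object_class(type_id, status));
    ANALYZE object;

    -- Filters written against the same expression are then estimated from
    -- those statistics instead of multiplying per-column selectivities.
    EXPLAIN SELECT * FROM object WHERE object_class(type_id, status) = 'hot';

The price is carrying an index that may never be used for scans, which is exactly why a way to collect such statistics without the index, as proposed in this thread, would be attractive.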
[ { "msg_contents": "\nHi,\n\nWe have a problem with some of our query plans. One of our tables is quite volatile, but postgres always uses the last statistics snapshot from the last time it was analyzed for query planning. Is there a way to tell postgres that it should not trust the statistics for this table? Basically we want it to assume that there may be 0, 1 or 100,000 entries coming out from a query on that table at any time, and that it should not make any assumptions.\n\nThanks,\nBrian\n ========================\nBrian Herlihy\nTrellian Pty Ltd\n+65 67534396 (Office)\n+65 92720492 (Handphone)\n========================\n\n", "msg_date": "Wed, 3 Jun 2009 18:43:06 -0700 (PDT)", "msg_from": "Brian Herlihy <[email protected]>", "msg_from_op": true, "msg_subject": "Query plan issues - volatile tables" }, { "msg_contents": "Brian Herlihy wrote:\n> We have a problem with some of our query plans. One of our\n>tables is quite volatile, but postgres always uses the last\n>statistics snapshot from the last time it was analyzed for query\n>planning. Is there a way to tell postgres that it should not\n>trust the statistics for this table? Basically we want it to\n>assume that there may be 0, 1 or 100,000 entries coming out from\n>a query on that table at any time, and that it should not make\n>any assumptions.> \n\nI had a similar problem, and just changed my application to do an analyze either just before the query, or just after a major update to the table. Analyze is very fast, almost always a orders of magnitude faster than the time lost to a poor query plan.\n\nCraig\n", "msg_date": "Thu, 04 Jun 2009 09:04:12 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan issues - volatile tables" } ]
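The workaround Craig describes above is worth spelling out, since ANALYZE on a single table is cheap compared to even one badly planned query. A minimal sketch with made-up table and column names:

    -- after the bulk change to the volatile table...
    COPY volatile_queue FROM '/tmp/batch.csv';
    -- ...refresh its statistics explicitly (typically takes milliseconds)
    ANALYZE volatile_queue;
    -- the planner now sees realistic row counts for the join below
    SELECT q.id, b.payload
    FROM volatile_queue q
    JOIN big_table b ON b.id = q.id;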
[ { "msg_contents": "\nRevisiting the thread a month back or so, I'm still investigating \nperformance problems with GiST indexes in Postgres.\n\nLooking at http://wiki.postgresql.org/wiki/PostgreSQL_8.4_Open_Items I'd \nlike to clarify the contrib/seg issue. Contrib/seg is vulnerable to \npathological behaviour which is fixed by my second patch, which can be \nviewed as complete. Contrib/cube, being multi-dimensional, is not affected \nto any significant degree, so should not need alteration.\n\nA second quite distinct issue is the general performance of GiST indexes \nwhich is also mentioned in the old thread linked from Open Items. For \nthat, we have a test case at \nhttp://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php for \nbtree_gist indexes. I have a similar example with the bioseg GiST index. I \nhave completely reimplemented the same algorithms in Java for algorithm\ninvestigation and instrumentation purposes, and it runs about a hundred \ntimes faster than in Postgres. I think this is a problem, and I'm willing \nto do some investigation to try and solve it.\n\nDo you have a recommendation for how to go about profiling Postgres, what \nprofiler to use, etc? I'm running on Debian Linux x86_64.\n\nMatthew\n\n-- \n Jadzia: Don't forget the 34th rule of acquisition: Peace is good for business.\n Quark: That's the 35th.\n Jadzia: Oh yes, that's right. What's the 34th again?\n Quark: War is good for business. It's easy to get them mixed up.\n", "msg_date": "Thu, 4 Jun 2009 17:33:14 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "GiST index performance" }, { "msg_contents": "On Thu, Jun 4, 2009 at 12:33 PM, Matthew Wakeling<[email protected]> wrote:\n> Do you have a recommendation for how to go about profiling Postgres, what\n> profiler to use, etc? I'm running on Debian Linux x86_64.\n\nI mostly compile with --enable-profiling and use gprof. I know Tom\nLane has had success with oprofile for quick and dirty measurements\nbut I haven't quite figured out how to make all the bits work for that\nyet.\n\n...Robert\n", "msg_date": "Fri, 5 Jun 2009 15:42:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Fri, 5 Jun 2009, Robert Haas wrote:\n> On Thu, Jun 4, 2009 at 12:33 PM, Matthew Wakeling<[email protected]> wrote:\n>> Do you have a recommendation for how to go about profiling Postgres, what\n>> profiler to use, etc? I'm running on Debian Linux x86_64.\n>\n> I mostly compile with --enable-profiling and use gprof. 
I know Tom\n> Lane has had success with oprofile for quick and dirty measurements\n> but I haven't quite figured out how to make all the bits work for that\n> yet.\n\nOkay, I recompiled Postgres 8.4 beta 2 with profiling, installed \nbtree_gist and bioseg, and did a large multi-column (btree_gist, bioseg) \nindex search.\n\nEXPLAIN ANALYSE SELECT *\nFROM location l1, location l2\nWHERE l1.objectid = l2.objectid\n AND bioseg_create(l1.intermine_start, l1.intermine_end) && bioseg_create(l2.intermine_start, l2.intermine_end);\n\n QUERY PLAN\n-----------------------------------------------------------------------\n Nested Loop (cost=0.01..9292374.77 rows=19468831 width=130)\n (actual time=0.337..24538315.569 rows=819811624 loops=1)\n -> Seq Scan on location l1\n (cost=0.00..90092.17 rows=4030117 width=65)\n (actual time=0.033..2561.142 rows=4030122 loops=1)\n -> Index Scan using location_object_bioseg on location l2\n (cost=0.01..1.58 rows=35 width=65)\n (actual time=0.467..5.990 rows=203 loops=4030122)\n Index Cond: ((l2.objectid = l1.objectid) AND (bioseg_create(l1.intermine_start, l1.intermine_end) && bioseg_create(l2.intermine_start, l2.intermine_end)))\n Total runtime: 24613918.894 ms\n(5 rows)\n\nHere is the top of the profiling result:\n\nFlat profile:\n\nEach sample counts as 0.01 seconds.\n % cumulative self self total\n time seconds seconds calls Ks/call Ks/call name\n 35.41 2087.04 2087.04 823841746 0.00 0.00 gistnext\n 15.36 2992.60 905.56 8560743382 0.00 0.00 fmgr_oldstyle\n 8.65 3502.37 509.77 3641723296 0.00 0.00 FunctionCall5\n 7.08 3919.87 417.50 3641723296 0.00 0.00 gistdentryinit\n 5.03 4216.59 296.72 6668 0.00 0.00 DirectFunctionCall1\n 3.84 4443.05 226.46 3641724371 0.00 0.00 FunctionCall1\n 2.32 4579.94 136.89 1362367466 0.00 0.00 hash_search_with_hash_value\n 1.89 4691.15 111.21 827892583 0.00 0.00 FunctionCall2\n 1.83 4799.27 108.12 FunctionCall6\n 1.77 4903.56 104.30 2799321398 0.00 0.00 LWLockAcquire\n 1.45 4989.24 85.68 1043922430 0.00 0.00 PinBuffer\n 1.37 5070.15 80.91 823844102 0.00 0.00 index_getnext\n 1.33 5148.29 78.15 1647683886 0.00 0.00 slot_deform_tuple\n 0.95 5204.36 56.07 738604164 0.00 0.00 heap_page_prune_opt\n\n\nThe final cumulative time is 5894.06 seconds, which doesn't seem to match \nthe query run time at all. Also, no calls to anything including \"bioseg\" \nin the name are recorded, although they are definitely called as the GiST \nsupport functions for that data type.\n\nCould someone give me a hand decyphering this result? It seems from this \nthat the time is spent in the gistnext function (in \nsrc/backend/access/gist/gistget.c) and not its children. However, it's \nquite a large function containing several loops - is there a way to make \nthe profiling result more specific?\n\nMatthew\n\n-- \n If you let your happiness depend upon how somebody else feels about you,\n now you have to control how somebody else feels about you. -- Abraham Hicks\n", "msg_date": "Wed, 10 Jun 2009 12:46:28 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> The final cumulative time is 5894.06 seconds, which doesn't seem to match \n> the query run time at all.\n\nDepending on the platform you're using, gprof might have the wrong idea\nabout the kernel's tick rate, leading to its numbers being some multiple\nor fraction of the true elapsed time. 
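(For reference, the gprof route Robert mentions earlier boils down to roughly the following. The install prefix is only an example, and the location of gmon.out depends on how profiling was set up: builds defining PROFILE_PID_DIR write per-backend profiles into gprof/<pid> subdirectories under the server's working directory, so treat the path below as an assumption to verify locally.)

    ./configure --enable-profiling --prefix=/usr/local/pgsql-prof
    make && make install
    # run the slow query in a single session, then disconnect cleanly so the
    # backend exits and flushes its profile data
    gprof /usr/local/pgsql-prof/bin/postgres $PGDATA/gprof/<backend_pid>/gmon.out | less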
I had that problem awhile back\nhttps://bugzilla.redhat.com/show_bug.cgi?id=151763\nalthough I'd like to think it's fixed on all recent Linuxen. However,\nyour bigger problem is here:\n\n> Also, no calls to anything including \"bioseg\" \n> in the name are recorded, although they are definitely called as the GiST \n> support functions for that data type.\n\nI have never had any success in getting gprof to profile functions that\nare in loadable libraries, which of course is exactly what you need to do\nhere. I have found hints on the web claiming that it's possible, but\nthey don't work for me. gprof just ignores both the functions and the\ntime spent in them.\n\nPersonally I generally use oprofile these days, because it doesn't have\nthat problem and also doesn't require any special build options. If you\ndon't have that available to you, the best bet is probably to make a\ntest build of Postgres in which your functions are linked directly into\nthe main postgres executable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jun 2009 12:51:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Wed, 10 Jun 2009, Tom Lane wrote:\n>> Also, no calls to anything including \"bioseg\"\n>> in the name are recorded, although they are definitely called as the GiST\n>> support functions for that data type.\n>\n> I have never had any success in getting gprof to profile functions that\n> are in loadable libraries, which of course is exactly what you need to do\n> here.\n\nThat sucks. However, as another observation, no calls to \"gistfindnext\" \nare recorded in the profile either, and that's in the same source file as \n\"gistnext\" which is recorded. Could it have been inlined? Shouldn't \ninlining be switched off on a profiling build?\n\n> ...the best bet is probably to make a test build of Postgres in which \n> your functions are linked directly into the main postgres executable.\n\nI'll give that a try. Oprofile scares me with the sheer number of options.\n\nMatthew\n\n-- \n Prolog doesn't have enough parentheses. -- Computer Science Lecturer\n", "msg_date": "Thu, 11 Jun 2009 13:37:54 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> That sucks. However, as another observation, no calls to \"gistfindnext\" \n> are recorded in the profile either, and that's in the same source file as \n> \"gistnext\" which is recorded. Could it have been inlined?\n\nProbably.\n\n> Shouldn't inlining be switched off on a profiling build?\n\nWhy? You generally want to profile the code's actual behavior, or as\nnear as you can get to observing that. Defeating compiler optimizations\ndoesn't sound like something that -pg should do on its own. If you\nreally want it, there's a switch for it.\n\n> Oprofile scares me with the sheer number of options.\n\nYou can ignore practically all of them; the defaults are pretty sane.\nThe recipe I usually follow is:\n\n\nInitial setup (only needed once per system boot):\n\nsudo opcontrol --init\nsudo opcontrol --setup --no-vmlinux\n\n(If you need detail about what the kernel is doing, you need kernel\ndebug symbols and then specify them on the previous line)\n\nStart/stop profiling\n\nsudo opcontrol --start\nsudo opcontrol --reset\n... 
exercise your debug-enabled program here ...\nsudo opcontrol --dump ; sudo opcontrol --shutdown\n\nThe test case should run at least a minute or two to get numbers with\nreasonable accuracy.\n\nAnalysis:\n\nopreport --long-filenames | more\n\nopreport -l image:/path/to/postgres | more\n\nif you really want detail:\n\nopannotate --source /path/to/postgres >someplace\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Jun 2009 09:27:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Thu, 11 Jun 2009, Tom Lane wrote:\n>> Oprofile scares me with the sheer number of options.\n>\n> You can ignore practically all of them; the defaults are pretty sane.\n\nThanks, that was helpful. Here is the top of opreport --long-filenames:\n\nCPU: Core 2, speed 1998 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000\nCPU_CLK_UNHALT...|\n samples| %|\n------------------\n 5114464 61.5404 /lib/libc-2.9.so\n 1576829 18.9734 /changeable/pgsql_8.4_profiling/bin/postgres\n CPU_CLK_UNHALT...|\n samples| %|\n ------------------\n 1572346 99.7157 /changeable/pgsql_8.4_profiling/bin/postgres\n 4482 0.2842 [vdso] (tgid:13593 range:0x7fff8dbff000-0x7fff8dc00000)\n 1 6.3e-05 [vdso] (tgid:13193 range:0x7fff8dbff000-0x7fff8dc00000)\n 409534 4.9278 /no-vmlinux\n 309990 3.7300 /changeable/pgsql_8.4_profiling/lib/btree_gist.so\n 203684 2.4509 /changeable/pgsql_8.4_profiling/lib/bioseg.so\n\nSo it seems that btree_gist and bioseg are not using that much CPU at all, \ncompared to core postgres code. In fact, the majority of time seems to be \nspent in libc. Unfortunately my libc doesn't have any debugging symbols.\n\nAnyway, running opannotate seems to make it clear that time *is* spent in \nthe gistnext function, but almost all of that is in children of the \nfunction. Lots of time is actually spent in fmgr_oldstyle though.\n\nHere is the top of opreport -l:\n\nCPU: Core 2, speed 1998 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000\nsamples % image name app name symbol name\n5114464 61.5404 libc-2.9.so libc-2.9.so (no symbols)\n496215 5.9708 postgres postgres gistnext\n409534 4.9278 no-vmlinux no-vmlinux (no symbols)\n404037 4.8616 postgres postgres fmgr_oldstyle\n170079 2.0465 btree_gist.so btree_gist.so gbt_int4_consistent\n160016 1.9254 postgres postgres gistdentryinit\n153266 1.8442 nvidia_drv.so nvidia_drv.so (no symbols)\n152463 1.8345 postgres postgres FunctionCall5\n149374 1.7974 postgres postgres FunctionCall1\n131112 1.5776 libxul.so libxul.so (no symbols)\n120871 1.4544 postgres postgres .plt\n94506 1.1372 bioseg.so bioseg.so bioseg_gist_consistent\n\nI'm guessing my next step is to install a version of libc with debugging \nsymbols?\n\nMatthew\n\n-- \n Some people, when confronted with a problem, think \"I know, I'll use regular\n expressions.\" Now they have two problems. -- Jamie Zawinski\n", "msg_date": "Thu, 11 Jun 2009 16:07:12 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> So it seems that btree_gist and bioseg are not using that much CPU at all, \n> compared to core postgres code. In fact, the majority of time seems to be \n> spent in libc. 
Unfortunately my libc doesn't have any debugging symbols.\n\nhmm ... memcpy or qsort maybe?\n\n> Anyway, running opannotate seems to make it clear that time *is* spent in \n> the gistnext function, but almost all of that is in children of the \n> function. Lots of time is actually spent in fmgr_oldstyle though.\n\nSo it'd be worth converting your functions to V1 style.\n\n> I'm guessing my next step is to install a version of libc with debugging \n> symbols?\n\nYeah, if you want to find out what's happening in libc, that's what you\nneed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Jun 2009 11:39:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Thu, 11 Jun 2009, Tom Lane wrote:\n> So it'd be worth converting your functions to V1 style.\n\nDoes that produce a significant reduction in overhead? (You'll probably \nsay \"yes, that's the whole point\").\n\n> hmm ... memcpy or qsort maybe?\n\nSurprise:\n\nCPU: Core 2, speed 1998 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000\nsamples % image name app name symbol name\n3005354 40.2868 libc-2.9.so libc-2.9.so __mcount_internal\n1195592 16.0269 libc-2.9.so libc-2.9.so mcount\n549998 7.3727 postgres postgres gistnext\n420465 5.6363 postgres postgres fmgr_oldstyle\n376333 5.0447 no-vmlinux no-vmlinux (no symbols)\n210984 2.8282 postgres postgres FunctionCall5\n182509 2.4465 postgres postgres gistdentryinit\n174356 2.3372 btree_gist.so btree_gist.so gbt_int4_consistent\n142829 1.9146 postgres postgres FunctionCall1\n129800 1.7400 postgres postgres .plt\n119180 1.5976 nvidia_drv.so nvidia_drv.so (no symbols)\n96351 1.2916 libxul.so libxul.so (no symbols)\n91726 1.2296 btree_gist.so btree_gist.so gbt_num_consistent\n\nA quick grep in the postgres source for mcount reveals no hits. No idea \nwhat it does - there is no man page for it.\n\nMatthew\n\n-- \n I pause for breath to allow you to get over your shock that I really did cover\n all that in only five minutes... -- Computer Science Lecturer\n", "msg_date": "Thu, 11 Jun 2009 17:42:11 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Thu, 11 Jun 2009, Matthew Wakeling wrote:\n> A quick grep in the postgres source for mcount reveals no hits. No idea what \n> it does - there is no man page for it.\n\nAh - that's part of gprof. I'll recompile without --enable-profiling and \ntry again. Duh.\n\nMatthew\n\n-- \n What goes up must come down. Ask any system administrator.\n", "msg_date": "Thu, 11 Jun 2009 17:44:01 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "\nOkay, I don't know quite what's happening here. Tom, perhaps you could \nadvise. Running opannotate --source, I get this sort of stuff:\n\n/*\n * Total samples for file : \".../postgresql-8.4beta2/src/backend/access/gist/gistget.c\"\n *\n * 6880 0.2680\n */\n\nand then:\n\n :static int64\n :gistnext(IndexScanDesc scan, TIDBitmap *tbm)\n 81 0.0032 :{ /* gistnext total: 420087 16.3649 */\n : Page p;\n\n\n\nThe gistnext total doesn't seem to correspond to the amount I get by \nadding up all the individual lines in gistnest. 
Moreover, it is greater \nthan the total samples attributed to the whole file, and greater than the \nsamples assigned to all the lines where gistnext is called.\n\nHowever, yes it does seem like fmgr.c accounts for a large proportion of \nsamples. Also, I still seem to be getting mcount, even after recompiling \nwithout --enable-profiling.\n\nCPU: Core 2, speed 1998 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000\nsamples % image name app name symbol name\n460213 17.9280 postgres postgres fmgr_oldstyle\n420087 16.3649 postgres postgres gistnext\n254975 9.9328 postgres postgres FunctionCall5\n239572 9.3327 libc-2.9.so libc-2.9.so mcount\n219963 8.5689 libc-2.9.so libc-2.9.so __mcount_internal\n125674 4.8957 no-vmlinux no-vmlinux (no symbols)\n117184 4.5650 postgres postgres gistdentryinit\n106967 4.1670 btree_gist.so btree_gist.so gbt_int4_consistent\n95677 3.7272 postgres postgres FunctionCall1\n75397 2.9372 bioseg.so bioseg.so bioseg_gist_consistent\n58832 2.2919 btree_gist.so btree_gist.so gbt_num_consistent\n39128 1.5243 bioseg.so bioseg.so bioseg_overlap\n33874 1.3196 libxul.so libxul.so (no symbols)\n32008 1.2469 bioseg.so bioseg.so bioseg_gist_leaf_consistent\n20890 0.8138 nvidia_drv.so nvidia_drv.so (no symbols)\n19321 0.7527 bioseg.so bioseg.so bioseg_gist_decompress\n17365 0.6765 libmozjs.so.1d libmozjs.so.1d (no symbols)\n\nMatthew\n\n-- \n A good programmer is one who looks both ways before crossing a one-way street.\n Considering the quality and quantity of one-way streets in Cambridge, it\n should be no surprise that there are so many good programmers there.\n", "msg_date": "Thu, 11 Jun 2009 18:23:24 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> The gistnext total doesn't seem to correspond to the amount I get by \n> adding up all the individual lines in gistnest.\n\nHmm, hadn't you determined that some other stuff was being inlined into\ngistnext? I'm not really sure how opannotate displays such cases, but\nthis might be an artifact of that.\n\n> However, yes it does seem like fmgr.c accounts for a large proportion of \n> samples. Also, I still seem to be getting mcount, even after recompiling \n> without --enable-profiling.\n\nYou must still have some -pg code in there someplace. Maybe you didn't\nrecompile bioseg.so, or even psql? Remember the libc counts you are\nlooking at are for system-wide usage of libc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Jun 2009 13:34:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "Matthew Wakeling wrote:\n> Okay, I don't know quite what's happening here. Tom, perhaps you could \n> advise. Running opannotate --source, I get this sort of stuff:\n> \n> /*\n> * Total samples for file : \n> \".../postgresql-8.4beta2/src/backend/access/gist/gistget.c\"\n> *\n> * 6880 0.2680\n> */\n> \n> and then:\n> \n> :static int64\n> :gistnext(IndexScanDesc scan, TIDBitmap *tbm)\n> 81 0.0032 :{ /* gistnext total: 420087 16.3649 */\n> : Page p;\n> \n> \n> \n> The gistnext total doesn't seem to correspond to the amount I get by \n> adding up all the individual lines in gistnest. 
Moreover, it is greater \n> than the total samples attributed to the whole file, and greater than \n> the samples assigned to all the lines where gistnext is called.\n\nthere's another alternative for profiling that you might try if you \ncan't get sensible results out of oprofile - cachegrind (which is part \nof the valgrind toolset).\n\nbasically it runs the code in an emulated environment, but records every \naccess (reads/writes/CPU cycles/cache hits/misses/etc). it's *extremely* \ngood at finding hotspots, even when they are due to 'cache flushing' \nbehavior in your code (for example, trawling a linked list is touching a \nbunch of pages and effectively blowing your CPU cache..)\n\nthere's an associated graphical tool called kcachegrind which takes the \ndumped output and lets you drill down, even to the source code level \n(with cycle count/percentage annotations on the source lines)\n\nall you need to do is compile postgres with debug symbols (full \noptimization ON, otherwise you end up reaching the wrong conclusions).\n\nthere's an example of running valgrind on postgres here:\n\n http://blog.cleverelephant.ca/2008/08/valgrinding-postgis.html\n\nfor cachegrind, you basically need to use 'cachegrind' instead of \n'valgrind', and don't disable optimization when you build..", "msg_date": "Fri, 12 Jun 2009 09:20:15 -0600", "msg_from": "Adam Gundy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Tom Lane wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> I'm guessing my next step is to install a version of libc with debugging \n>> symbols?\n> \n> Yeah, if you want to find out what's happening in libc, that's what you\n> need.\n\nGetting callgraph information from oprofile would also help. Although it \nwon't directly tell what function in libc is being called, you would see \nwhere the calls are coming from, which is usually enough to guess what \nthe libc function is.\n\nYou can also get the oprofile data, including callgraph, into \nkcachegrind, which is *very* helpful. Here's a script I use: \nhttp://roberts.vorpus.org/~njs/op2calltree.py\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 17 Jun 2009 15:46:03 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Thu, 11 Jun 2009, Tom Lane wrote:\n\n> Matthew Wakeling <[email protected]> writes:\n>> Oprofile scares me with the sheer number of options.\n>\n> You can ignore practically all of them; the defaults are pretty sane.\n> The recipe I usually follow is:\n\nAn excellent brain dump from Tom and lots of other good stuff in this \nthread. 
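(To make Adam's cachegrind suggestion concrete, here is a sketch only, with example paths, assuming the server was built with debug symbols and normal optimization as he recommends.)

    valgrind --tool=cachegrind --trace-children=yes \
        /usr/local/pgsql/bin/postgres -D /path/to/data
    # run the problem query against this (much slower) instance, then stop it;
    # each process leaves a cachegrind.out.<pid> file behind
    cg_annotate cachegrind.out.<pid> | less    # or load the file into kcachegrind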
I just dumped a summary of all the profiling lore discussed onto \nhttp://wiki.postgresql.org/wiki/Profiling_with_OProfile as I don't know \nthat I've ever seen a concise intro to this subject before.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 26 Jun 2009 10:33:24 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Fri, Jun 26, 2009 at 10:33 AM, Greg Smith<[email protected]> wrote:\n> On Thu, 11 Jun 2009, Tom Lane wrote:\n>\n>> Matthew Wakeling <[email protected]> writes:\n>>>\n>>> Oprofile scares me with the sheer number of options.\n>>\n>> You can ignore practically all of them; the defaults are pretty sane.\n>> The recipe I usually follow is:\n>\n> An excellent brain dump from Tom and lots of other good stuff in this\n> thread.  I just dumped a summary of all the profiling lore discussed onto\n> http://wiki.postgresql.org/wiki/Profiling_with_OProfile as I don't know that\n> I've ever seen a concise intro to this subject before.\n\nNice, thanks!\n\n...Robert\n", "msg_date": "Fri, 26 Jun 2009 11:00:24 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "\nWas there every any conclusion on this issue?\n\n---------------------------------------------------------------------------\n\nMatthew Wakeling wrote:\n> \n> Revisiting the thread a month back or so, I'm still investigating \n> performance problems with GiST indexes in Postgres.\n> \n> Looking at http://wiki.postgresql.org/wiki/PostgreSQL_8.4_Open_Items I'd \n> like to clarify the contrib/seg issue. Contrib/seg is vulnerable to \n> pathological behaviour which is fixed by my second patch, which can be \n> viewed as complete. Contrib/cube, being multi-dimensional, is not affected \n> to any significant degree, so should not need alteration.\n> \n> A second quite distinct issue is the general performance of GiST indexes \n> which is also mentioned in the old thread linked from Open Items. For \n> that, we have a test case at \n> http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php for \n> btree_gist indexes. I have a similar example with the bioseg GiST index. I \n> have completely reimplemented the same algorithms in Java for algorithm\n> investigation and instrumentation purposes, and it runs about a hundred \n> times faster than in Postgres. I think this is a problem, and I'm willing \n> to do some investigation to try and solve it.\n> \n> Do you have a recommendation for how to go about profiling Postgres, what \n> profiler to use, etc? I'm running on Debian Linux x86_64.\n> \n> Matthew\n> \n> -- \n> Jadzia: Don't forget the 34th rule of acquisition: Peace is good for business.\n> Quark: That's the 35th.\n> Jadzia: Oh yes, that's right. What's the 34th again?\n> Quark: War is good for business. It's easy to get them mixed up.\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. 
+\n", "msg_date": "Thu, 25 Feb 2010 18:42:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Thu, Feb 25, 2010 at 6:42 PM, Bruce Momjian <[email protected]> wrote:\n> Was there every any conclusion on this issue?\n\nI don't think so.\n\n...Robert\n\n> ---------------------------------------------------------------------------\n>\n> Matthew Wakeling wrote:\n>>\n>> Revisiting the thread a month back or so, I'm still investigating\n>> performance problems with GiST indexes in Postgres.\n>>\n>> Looking at http://wiki.postgresql.org/wiki/PostgreSQL_8.4_Open_Items I'd\n>> like to clarify the contrib/seg issue. Contrib/seg is vulnerable to\n>> pathological behaviour which is fixed by my second patch, which can be\n>> viewed as complete. Contrib/cube, being multi-dimensional, is not affected\n>> to any significant degree, so should not need alteration.\n>>\n>> A second quite distinct issue is the general performance of GiST indexes\n>> which is also mentioned in the old thread linked from Open Items. For\n>> that, we have a test case at\n>> http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php for\n>> btree_gist indexes. I have a similar example with the bioseg GiST index. I\n>> have completely reimplemented the same algorithms in Java for algorithm\n>> investigation and instrumentation purposes, and it runs about a hundred\n>> times faster than in Postgres. I think this is a problem, and I'm willing\n>> to do some investigation to try and solve it.\n>>\n>> Do you have a recommendation for how to go about profiling Postgres, what\n>> profiler to use, etc? I'm running on Debian Linux x86_64.\n>>\n>> Matthew\n>>\n>> --\n>>  Jadzia: Don't forget the 34th rule of acquisition: Peace is good for business.\n>>  Quark:  That's the 35th.\n>>  Jadzia: Oh yes, that's right. What's the 34th again?\n>>  Quark:  War is good for business. It's easy to get them mixed up.\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> --\n>  Bruce Momjian  <[email protected]>        http://momjian.us\n>  EnterpriseDB                             http://enterprisedb.com\n>  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do\n>  + If your life is a hard drive, Christ can be your backup. +\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 2 Mar 2010 11:21:15 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Thu, 25 Feb 2010, Bruce Momjian wrote:\n> Was there every any conclusion on this issue?\n\nNot really. Comments inline:\n\n> Matthew Wakeling wrote:\n>> Revisiting the thread a month back or so, I'm still investigating\n>> performance problems with GiST indexes in Postgres.\n>>\n>> Looking at http://wiki.postgresql.org/wiki/PostgreSQL_8.4_Open_Items I'd\n>> like to clarify the contrib/seg issue. Contrib/seg is vulnerable to\n>> pathological behaviour which is fixed by my second patch, which can be\n>> viewed as complete. Contrib/cube, being multi-dimensional, is not affected\n>> to any significant degree, so should not need alteration.\n\nThis issue is addressed by my patch, which AFAIK noone has reviewed. 
\nHowever, that patch was derived from a patch that I applied to bioseg, \nwhich is itself a derivative of seg. This patch works very well indeed, \nand gave an approximate 100 times speed improvement in the one test I ran.\n\nSo you could say that the sister patch of the one I submitted is tried and \ntested in production.\n\n>> A second quite distinct issue is the general performance of GiST indexes\n>> which is also mentioned in the old thread linked from Open Items. For\n>> that, we have a test case at\n>> http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php for\n>> btree_gist indexes. I have a similar example with the bioseg GiST index. I\n>> have completely reimplemented the same algorithms in Java for algorithm\n>> investigation and instrumentation purposes, and it runs about a hundred\n>> times faster than in Postgres. I think this is a problem, and I'm willing\n>> to do some investigation to try and solve it.\n\nI have not made any progress on this issue. I think Oleg and Teodor would \nbe better placed working it out. All I can say is that I implemented the \nexact same indexing algorithm in Java, and it performed 100 times faster \nthan Postgres. Now, Postgres has to do a lot of additional work, like \nmapping the index onto disc, locking pages, and abstracting to plugin user \nfunctions, so I would expect some difference - I'm not sure 100 times is \nreasonable though. I tried to do some profiling, but couldn't see any one \nsection of code that was taking too much time. Not sure what I can further \ndo.\n\nMatthew\n\n-- \n Some people, when confronted with a problem, think \"I know, I'll use regular\n expressions.\" Now they have two problems. -- Jamie Zawinski\n", "msg_date": "Mon, 15 Mar 2010 15:58:51 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Mon, Mar 15, 2010 at 11:58 AM, Matthew Wakeling <[email protected]> wrote:\n> On Thu, 25 Feb 2010, Bruce Momjian wrote:\n>>\n>> Was there every any conclusion on this issue?\n>\n> Not really. Comments inline:\n>\n>> Matthew Wakeling wrote:\n>>>\n>>> Revisiting the thread a month back or so, I'm still investigating\n>>> performance problems with GiST indexes in Postgres.\n>>>\n>>> Looking at http://wiki.postgresql.org/wiki/PostgreSQL_8.4_Open_Items I'd\n>>> like to clarify the contrib/seg issue. Contrib/seg is vulnerable to\n>>> pathological behaviour which is fixed by my second patch, which can be\n>>> viewed as complete. Contrib/cube, being multi-dimensional, is not\n>>> affected\n>>> to any significant degree, so should not need alteration.\n>\n> This issue is addressed by my patch, which AFAIK noone has reviewed.\n> However, that patch was derived from a patch that I applied to bioseg, which\n> is itself a derivative of seg. 
This patch works very well indeed, and gave\n> an approximate 100 times speed improvement in the one test I ran.\n>\n> So you could say that the sister patch of the one I submitted is tried and\n> tested in production.\n\nWe rely fairly heavily on the commitfest app to track which patches\nneed review; perhaps it would be good to add it here.\n\nhttps://commitfest.postgresql.org/action/commitfest_view/open\n\nI seem to recall thinking that this patch wasn't ready to apply for\nsome reason, but I'm not sure why I thought that.\n\n...Robert\n", "msg_date": "Mon, 15 Mar 2010 21:58:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Matthew Wakeling wrote:\n>> Matthew Wakeling wrote:\n>>> A second quite distinct issue is the general performance of GiST \n>>> indexes\n>>> which is also mentioned in the old thread linked from Open Items. For\n>>> that, we have a test case at\n>>> http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php \n>>> for\n>>> btree_gist indexes. I have a similar example with the bioseg GiST \n>>> index. I\n>>> have completely reimplemented the same algorithms in Java for algorithm\n>>> investigation and instrumentation purposes, and it runs about a hundred\n>>> times faster than in Postgres. I think this is a problem, and I'm \n>>> willing\n>>> to do some investigation to try and solve it.\n> I have not made any progress on this issue. I think Oleg and Teodor \n> would be better placed working it out. All I can say is that I \n> implemented the exact same indexing algorithm in Java, and it \n> performed 100 times faster than Postgres. Now, Postgres has to do a \n> lot of additional work, like mapping the index onto disc, locking \n> pages, and abstracting to plugin user functions, so I would expect \n> some difference - I'm not sure 100 times is reasonable though. I tried \n> to do some profiling, but couldn't see any one section of code that \n> was taking too much time. Not sure what I can further do.\nHello Mathew and list,\n\nA lot of time spent in gistget.c code and a lot of functioncall5's to \nthe gist's consistent function which is out of sight for gprof.\nSomething different but related since also gist: we noticed before that \ngist indexes that use a compressed form for index entries suffer from \nrepeated compress calls on query operands (see \nhttp://archives.postgresql.org/pgsql-hackers/2009-05/msg00078.php).\n\nThe btree_gist int4 compress function calls the generic \ngbt_num_compress, which does a palloc. Maybe this palloc is allso hit al \nlot when scanning the index, because the constants that are queries with \nare repeatedly compressed and palloced.\n\nregards,\nYeb Havinga\n\n\n", "msg_date": "Tue, 16 Mar 2010 17:18:19 +0100", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Yeb Havinga wrote:\n> Matthew Wakeling wrote:\n>>> Matthew Wakeling wrote:\n>>>> A second quite distinct issue is the general performance of GiST \n>>>> indexes\n>>>> which is also mentioned in the old thread linked from Open Items. For\n>>>> that, we have a test case at\n>>>> http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php \n>>>> for\n>>>> btree_gist indexes. I have a similar example with the bioseg GiST \n>>>> index. 
I\n>>>> have completely reimplemented the same algorithms in Java for \n>>>> algorithm\n>>>> investigation and instrumentation purposes, and it runs about a \n>>>> hundred\n>>>> times faster than in Postgres. I think this is a problem, and I'm \n>>>> willing\n>>>> to do some investigation to try and solve it.\n>> I have not made any progress on this issue. I think Oleg and Teodor \n>> would be better placed working it out. All I can say is that I \n>> implemented the exact same indexing algorithm in Java, and it \n>> performed 100 times faster than Postgres. Now, Postgres has to do a \n>> lot of additional work, like mapping the index onto disc, locking \n>> pages, and abstracting to plugin user functions, so I would expect \n>> some difference - I'm not sure 100 times is reasonable though. I \n>> tried to do some profiling, but couldn't see any one section of code \n>> that was taking too much time. Not sure what I can further do.\n> Hello Mathew and list,\n>\n> A lot of time spent in gistget.c code and a lot of functioncall5's to \n> the gist's consistent function which is out of sight for gprof.\n> Something different but related since also gist: we noticed before \n> that gist indexes that use a compressed form for index entries suffer \n> from repeated compress calls on query operands (see \n> http://archives.postgresql.org/pgsql-hackers/2009-05/msg00078.php).\n>\n> The btree_gist int4 compress function calls the generic \n> gbt_num_compress, which does a palloc. Maybe this palloc is allso hit \n> al lot when scanning the index, because the constants that are queries \n> with are repeatedly compressed and palloced.\nLooked in the code a bit more - only the index nodes are compressed at \nindex creation, the consistent functions does not compress queries, so \nnot pallocs there. However when running Mathews example from \nhttp://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php \nwith the gist index, the coverage shows in gistget.c: 1000000 palloc0 's \nof gistsearchstack at line 152 and 2010982 palloc's also of the \ngistsearchstack on line 342. Two pfrees are also hit a lot: line 195: \n1010926 of a stackentry and line 293: 200056 times. My $0.02 cents is \nthat the pain is here. My knowledge of gistget or the other sources in \naccess/gist is zero, but couldn't it be possible to determine the \nmaximum needed size of the stack and then allocate it at once and use a \npop/push kind off api?\n\nregards,\nYeb Havinga\n\n\n\n\n\n>\n> regards,\n> Yeb Havinga\n>\n>\n\n", "msg_date": "Wed, 17 Mar 2010 10:26:14 +0100", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Yeb Havinga wrote:\n> Yeb Havinga wrote:\n>> Matthew Wakeling wrote:\n>>>> Matthew Wakeling wrote:\n>>>>> A second quite distinct issue is the general performance of GiST \n>>>>> indexes\n>>>>> which is also mentioned in the old thread linked from Open Items. For\n>>>>> that, we have a test case at\n>>>>> http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php \n>>>>> for\n>>>>> btree_gist indexes. I have a similar example with the bioseg GiST \n>>>>> index. I\n>>>>> have completely reimplemented the same algorithms in Java for \n>>>>> algorithm\n>>>>> investigation and instrumentation purposes, and it runs about a \n>>>>> hundred\n>>>>> times faster than in Postgres. 
I think this is a problem, and I'm \n>>>>> willing\n>>>>> to do some investigation to try and solve it.\n>>> I have not made any progress on this issue. I think Oleg and Teodor \n>>> would be better placed working it out. All I can say is that I \n>>> implemented the exact same indexing algorithm in Java, and it \n>>> performed 100 times faster than Postgres. Now, Postgres has to do a \n>>> lot of additional work, like mapping the index onto disc, locking \n>>> pages, and abstracting to plugin user functions, so I would expect \n>>> some difference - I'm not sure 100 times is reasonable though. I \n>>> tried to do some profiling, but couldn't see any one section of code \n>>> that was taking too much time. Not sure what I can further do.\n> Looked in the code a bit more - only the index nodes are compressed at \n> index creation, the consistent functions does not compress queries, so \n> not pallocs there. However when running Mathews example from \n> http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php \n> with the gist index, the coverage shows in gistget.c: 1000000 palloc0 \n> 's of gistsearchstack at line 152 and 2010982 palloc's also of the \n> gistsearchstack on line 342. Two pfrees are also hit a lot: line 195: \n> 1010926 of a stackentry and line 293: 200056 times. My $0.02 cents is \n> that the pain is here. My knowledge of gistget or the other sources in \n> access/gist is zero, but couldn't it be possible to determine the \n> maximum needed size of the stack and then allocate it at once and use \n> a pop/push kind off api?\nWaisted some time today on a ghost chase... I though that removing the \nmillions of pallocs would help, so I wrote an alternative of the \ngistsearchstack-stack to find out that it was not the index scanning \nitself that caused milltions of pallocs, but the scan being in the inner \nloop that was called 1000000 times. The actual scanning time was not \nchanged significantly.\nThe actual scanning time in my vm is for btree (actual \ntime=0.006..0.008) and gist (actual time=0.071..0.074). An error in my \nsearchstack alternative caused pages to be scanned twice, returing twice \nthe amount of rows (6 instead of 3 each time). This resulted in a \nlikewise increase of ms (actual time=0.075..0.150). Somewhere I hit \nsomething that causes ~= 0.070 ms twice. For a single index scan, \n0.070ms startup time for gist vs 0.006 for btree doesn't seem like a big \nproblem, but yeah when calling it a million times...\n\nregards,\nYeb Havinga\n\n", "msg_date": "Fri, 19 Mar 2010 17:31:02 +0100", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Yeb Havinga wrote:\n>>>>> Matthew Wakeling wrote:\n>>>>>> A second quite distinct issue is the general performance of GiST \n>>>>>> indexes\n>>>>>> which is also mentioned in the old thread linked from Open Items. \n>>>>>> For\n>>>>>> that, we have a test case at\n>>>>>> http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php \n>>>>>> for\n>>>>>> btree_gist indexes. I have a similar example with the bioseg GiST \n>>>>>> index. I\n>>>>>> have completely reimplemented the same algorithms in Java for \n>>>>>> algorithm\n>>>>>> investigation and instrumentation purposes, and it runs about a \n>>>>>> hundred\n>>>>>> times faster than in Postgres. 
I think this is a problem, and I'm \n>>>>>> willing\n>>>>>> to do some investigation to try and solve it.\nMore gist performance..\n\nSome background: I've been involved in developing several datatypes that \nmake use of gist indexes (we just published a paper on it, see \nhttp://arxiv.org/abs/1003.3370), that's the main reason I'm very much \ninterested in possible improvements in gist index speed. One of the \ndatatypes was 'very badly' indexable: non leaf pages were getting very \ngeneral keys, so in the end we could see from the scanning times \ncompared to sequential scans that the whole index was being scanned. One \nthing I remember was the idea that somehow it would be nice if the dept \nof the gist tree could be fiddled with: in that case keys of non leaf \nnodes would be more specific again. In the original Guttman R-tree paper \nthere was mention of a parameter that determined the size of entries in \nnodes and thereby indirectly the depth of the tree. I missed that in the \nPostgreSQL gist api.\n\nOne thing Gist scanning does very often is key comparisons. So another \napproach is to try to limit those and again this might be possible by \nincreasing the depth / decrease number of entries per page. I just did a \ntest where in gistfitpage the gistpagesize was divided by 5 and \nsomething similar in gistnospace.\n\nScantime before adjustment: about 70 seconds.\nAfter adjustment: 45 seconds.\n\nWith gist_stat from the gevel package confirmed that the depth was now 4 \n(original 3). Drawback: bigger index size because pages are not filled \ncompletely anymore.\n\nThe explain shows (actual time=0.030..0.032) for the inner loop times, \nwhich is less then half of the original scan time.\n\nSince the gistpagesize is derived from the database blocksize, it might \nbe wise to set the blocksize low for this case, I'm going to play with \nthis a bit more.\n\nregards,\nYeb Havinga\n\n\n", "msg_date": "Fri, 19 Mar 2010 21:20:49 +0100", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Yeb Havinga wrote:\n>\n> Since the gistpagesize is derived from the database blocksize, it \n> might be wise to set the blocksize low for this case, I'm going to \n> play with this a bit more.\nOk, one last mail before it turns into spam: with a 1KB database \nblocksize, the query now runs in 30 seconds (original 70 on my machine, \nshared buffers 240MB).\nThe per inner loop access time now 24 microsecs compared to on my \nmachine original 74 microsecs with 8KB size and 8 for the btree scan. 
\nNot a bad speedup with such a simple parameter :-)\n\npostgres=# EXPLAIN ANALYSE SELECT * FROM a, b WHERE a.a BETWEEN b.b AND \nb.b + 2;\n QUERY \nPLAN \n---------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..4169159462.20 rows=111109777668 width=8) \n(actual time=0.184..29540.355 rows=2999997 loops=1)\n -> Seq Scan on b (cost=0.00..47037.62 rows=999962 width=4) (actual \ntime=0.024..1783.484 rows=1000000 loops=1)\n -> Index Scan using a_a on a (cost=0.00..2224.78 rows=111114 \nwidth=4) (actual time=0.021..0.024 rows=3 loops=1000000)\n Index Cond: ((a.a >= b.b) AND (a.a <= (b.b + 2)))\n Total runtime: 30483.303 ms\n(5 rows)\n\n\npostgres=# select gist_stat('a_a');\n gist_stat \n-------------------------------------------\n Number of levels: 5 +\n Number of pages: 47618 +\n Number of leaf pages: 45454 +\n Number of tuples: 1047617 +\n Number of invalid tuples: 0 +\n Number of leaf tuples: 1000000 +\n Total size of tuples: 21523756 bytes+\n Total size of leaf tuples: 20545448 bytes+\n Total size of index: 48760832 bytes+\n \n(1 row)\n\n", "msg_date": "Fri, 19 Mar 2010 21:49:30 +0100", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Hi Yeb,\n\nI have not looked at the gist code, but would it be possible to\nmake virtual pages that have a size that is 1/power-of-2 * blocksize.\nThen the leaf node could be 1/8 or even 1/16 the size of the full\npagesize.\n\nRegards,\nKen\n\nOn Fri, Mar 19, 2010 at 09:49:30PM +0100, Yeb Havinga wrote:\n> Yeb Havinga wrote:\n>>\n>> Since the gistpagesize is derived from the database blocksize, it might be \n>> wise to set the blocksize low for this case, I'm going to play with this a \n>> bit more.\n> Ok, one last mail before it turns into spam: with a 1KB database blocksize, \n> the query now runs in 30 seconds (original 70 on my machine, shared buffers \n> 240MB).\n> The per inner loop access time now 24 microsecs compared to on my machine \n> original 74 microsecs with 8KB size and 8 for the btree scan. 
Not a bad \n> speedup with such a simple parameter :-)\n>\n> postgres=# EXPLAIN ANALYSE SELECT * FROM a, b WHERE a.a BETWEEN b.b AND b.b \n> + 2;\n> QUERY PLAN \n> \n> ---------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..4169159462.20 rows=111109777668 width=8) (actual \n> time=0.184..29540.355 rows=2999997 loops=1)\n> -> Seq Scan on b (cost=0.00..47037.62 rows=999962 width=4) (actual \n> time=0.024..1783.484 rows=1000000 loops=1)\n> -> Index Scan using a_a on a (cost=0.00..2224.78 rows=111114 width=4) \n> (actual time=0.021..0.024 rows=3 loops=1000000)\n> Index Cond: ((a.a >= b.b) AND (a.a <= (b.b + 2)))\n> Total runtime: 30483.303 ms\n> (5 rows)\n>\n>\n> postgres=# select gist_stat('a_a');\n> gist_stat \n> -------------------------------------------\n> Number of levels: 5 +\n> Number of pages: 47618 +\n> Number of leaf pages: 45454 +\n> Number of tuples: 1047617 +\n> Number of invalid tuples: 0 +\n> Number of leaf tuples: 1000000 +\n> Total size of tuples: 21523756 bytes+\n> Total size of leaf tuples: 20545448 bytes+\n> Total size of index: 48760832 bytes+\n> (1 row)\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 19 Mar 2010 16:16:38 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Fri, Mar 19, 2010 at 10:16 PM, Kenneth Marshall <[email protected]> wrote:\n\n> Hi Yeb,\n>\n> I have not looked at the gist code, but would it be possible to\n> make virtual pages that have a size that is 1/power-of-2 * blocksize.\n> Then the leaf node could be 1/8 or even 1/16 the size of the full\n> pagesize.\n>\n\nHello Ken,\n\nThe gist virtual pages would then match more the original blocksizes that\nwere used in Guttman's R-tree paper (first google result, then figure 4.5).\nSince the nature/characteristics of the underlying datatypes and keys is not\nchanged, it might be that with the disk pages getting larger, gist indexing\nhas therefore become unexpectedly inefficient.\n\nBut I am also not really into the core-gist code, but do have a motivation\nto dive into it (more than 200% performance increase in Mathew's test case).\nHowever I'd like to verify for community support before working on it.\n\nMaybe Theodor or Oleg could say something about how easy or hard it is to\ndo?\n\nregards,\nYeb Havinga\n\n\n>\n> Regards,\n> Ken\n>\n> On Fri, Mar 19, 2010 at 09:49:30PM +0100, Yeb Havinga wrote:\n> > Yeb Havinga wrote:\n> >>\n> >> Since the gistpagesize is derived from the database blocksize, it might\n> be\n> >> wise to set the blocksize low for this case, I'm going to play with this\n> a\n> >> bit more.\n> > Ok, one last mail before it turns into spam: with a 1KB database\n> blocksize,\n> > the query now runs in 30 seconds (original 70 on my machine, shared\n> buffers\n> > 240MB).\n> > The per inner loop access time now 24 microsecs compared to on my machine\n> > original 74 microsecs with 8KB size and 8 for the btree scan. 
Not a bad\n> > speedup with such a simple parameter :-)\n> >\n> > postgres=# EXPLAIN ANALYSE SELECT * FROM a, b WHERE a.a BETWEEN b.b AND\n> b.b\n> > + 2;\n> > QUERY PLAN\n> >\n> >\n> ---------------------------------------------------------------------------------------------------------------------------\n> > Nested Loop (cost=0.00..4169159462.20 rows=111109777668 width=8) (actual\n> > time=0.184..29540.355 rows=2999997 loops=1)\n> > -> Seq Scan on b (cost=0.00..47037.62 rows=999962 width=4) (actual\n> > time=0.024..1783.484 rows=1000000 loops=1)\n> > -> Index Scan using a_a on a (cost=0.00..2224.78 rows=111114 width=4)\n> > (actual time=0.021..0.024 rows=3 loops=1000000)\n> > Index Cond: ((a.a >= b.b) AND (a.a <= (b.b + 2)))\n> > Total runtime: 30483.303 ms\n> > (5 rows)\n> >\n> >\n> > postgres=# select gist_stat('a_a');\n> > gist_stat\n> > -------------------------------------------\n> > Number of levels: 5 +\n> > Number of pages: 47618 +\n> > Number of leaf pages: 45454 +\n> > Number of tuples: 1047617 +\n> > Number of invalid tuples: 0 +\n> > Number of leaf tuples: 1000000 +\n> > Total size of tuples: 21523756 bytes+\n> > Total size of leaf tuples: 20545448 bytes+\n> > Total size of index: 48760832 bytes+\n> > (1 row)\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n>\n\nOn Fri, Mar 19, 2010 at 10:16 PM, Kenneth Marshall <[email protected]> wrote:\nHi Yeb,\n\nI have not looked at the gist code, but would it be possible to\nmake virtual pages that have a size that is 1/power-of-2 * blocksize.\nThen the leaf node could be 1/8 or even 1/16 the size of the full\npagesize.Hello Ken,The gist virtual pages would then match more the original blocksizes that were used in Guttman's R-tree paper (first google result, then figure 4.5). Since the nature/characteristics of the underlying datatypes and keys is not changed, it might be that with the disk pages getting larger, gist indexing has therefore become unexpectedly inefficient.\nBut I am also not really into the core-gist code, but do have a motivation to dive into it (more than 200% performance increase in Mathew's test case). However I'd like to verify for community support before working on it.\nMaybe Theodor or Oleg could say something about how easy or hard it is to do? regards,Yeb Havinga \n\nRegards,\nKen\n\nOn Fri, Mar 19, 2010 at 09:49:30PM +0100, Yeb Havinga wrote:\n> Yeb Havinga wrote:\n>>\n>> Since the gistpagesize is derived from the database blocksize, it might be\n>> wise to set the blocksize low for this case, I'm going to play with this a\n>> bit more.\n> Ok, one last mail before it turns into spam: with a 1KB database blocksize,\n> the query now runs in 30 seconds (original 70 on my machine, shared buffers\n> 240MB).\n> The per inner loop access time now 24 microsecs compared to on my machine\n> original 74 microsecs with 8KB size and 8 for the btree scan. 
Not a bad\n> speedup with such a simple parameter :-)\n>\n> postgres=# EXPLAIN ANALYSE SELECT * FROM a, b WHERE a.a BETWEEN b.b AND b.b\n> + 2;\n>                                                        QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------\n> Nested Loop  (cost=0.00..4169159462.20 rows=111109777668 width=8) (actual\n> time=0.184..29540.355 rows=2999997 loops=1)\n>   ->  Seq Scan on b  (cost=0.00..47037.62 rows=999962 width=4) (actual\n> time=0.024..1783.484 rows=1000000 loops=1)\n>   ->  Index Scan using a_a on a  (cost=0.00..2224.78 rows=111114 width=4)\n> (actual time=0.021..0.024 rows=3 loops=1000000)\n>         Index Cond: ((a.a >= b.b) AND (a.a <= (b.b + 2)))\n> Total runtime: 30483.303 ms\n> (5 rows)\n>\n>\n> postgres=# select gist_stat('a_a');\n>                 gist_stat\n> -------------------------------------------\n> Number of levels:          5             +\n> Number of pages:           47618         +\n> Number of leaf pages:      45454         +\n> Number of tuples:          1047617       +\n> Number of invalid tuples:  0             +\n> Number of leaf tuples:     1000000       +\n> Total size of tuples:      21523756 bytes+\n> Total size of leaf tuples: 20545448 bytes+\n> Total size of index:       48760832 bytes+\n> (1 row)\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>", "msg_date": "Sat, 20 Mar 2010 17:16:16 +0100", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Sat, 20 Mar 2010, Yeb Havinga wrote:\n> The gist virtual pages would then match more the original blocksizes that\n> were used in Guttman's R-tree paper (first google result, then figure 4.5).\n> Since the nature/characteristics of the underlying datatypes and keys is not\n> changed, it might be that with the disk pages getting larger, gist indexing\n> has therefore become unexpectedly inefficient.\n\nYes, that is certainly a factor. For example, the page size for bioseg \nwhich we use here is 130 entries, which is very excessive, and doesn't \nallow very deep trees. On the other hand, it means that a single disc seek \nperforms quite a lot of work.\n\n> But I am also not really into the core-gist code, but do have a motivation\n> to dive into it (more than 200% performance increase in Mathew's test case).\n> However I'd like to verify for community support before working on it.\n\nI'd also love to dive into the core gist code, but am rather daunted by \nit. I believe that there is something there that is taking more time than \nI can account for. The indexing algorithm itself is good.\n\nMatthew\n\n-- \n \"The problem with defending the purity of the English language is that\n English is about as pure as a cribhouse whore. 
We don't just borrow words;\n on occasion, English has pursued other languages down alleyways to beat\n them unconscious and rifle their pockets for new vocabulary.\" - James Nicoll\n", "msg_date": "Mon, 22 Mar 2010 13:29:49 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Sat, 20 Mar 2010, Yeb Havinga wrote:\n>> The gist virtual pages would then match more the original blocksizes \n>> that\n>> were used in Guttman's R-tree paper (first google result, then figure \n>> 4.5).\n>> Since the nature/characteristics of the underlying datatypes and keys \n>> is not\n>> changed, it might be that with the disk pages getting larger, gist \n>> indexing\n>> has therefore become unexpectedly inefficient.\n>\n> Yes, that is certainly a factor. For example, the page size for bioseg \n> which we use here is 130 entries, which is very excessive, and doesn't \n> allow very deep trees. On the other hand, it means that a single disc \n> seek performs quite a lot of work.\nYeah, I only did in-memory fitting tests and wondered about increased \nio's. However I bet that even for bigger than ram db's, the benefit of \nhaving to fan out to less pages still outweighs the over-general non \nleaf nodes and might still result in less disk io's. I redid some \nearlier benchmarking with other datatypes with a 1kB block size and also \nmulticolumn gist and the multicolumn variant had an ever greater benefit \nthan the single column indexes, both equality and range scans. (Like \nexecution times down to 20% of original). If gist is important to you, I \nreally recommend doing a test with 1kB blocks.\n\nregards,\nYeb Havinga\n", "msg_date": "Mon, 22 Mar 2010 15:02:30 +0100", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Mon, 22 Mar 2010, Yeb Havinga wrote:\n>> Yes, that is certainly a factor. For example, the page size for bioseg \n>> which we use here is 130 entries, which is very excessive, and doesn't \n>> allow very deep trees. On the other hand, it means that a single disc seek \n>> performs quite a lot of work.\n\n> Yeah, I only did in-memory fitting tests and wondered about increased io's. \n> However I bet that even for bigger than ram db's, the benefit of having to \n> fan out to less pages still outweighs the over-general non leaf nodes and \n> might still result in less disk io's. I redid some earlier benchmarking with \n> other datatypes with a 1kB block size and also multicolumn gist and the \n> multicolumn variant had an ever greater benefit than the single column \n> indexes, both equality and range scans. (Like execution times down to 20% of \n> original). If gist is important to you, I really recommend doing a test with \n> 1kB blocks.\n\nPurely from a disc seek count point of view, assuming an infinite CPU \nspeed and infinite disc transfer rate, the larger the index pages the \nbetter. 
The number of seeks per fetch will be equivalent to the depth of \nthe tree.\n\nIf you take disc transfer rate into account, the break-even point is when \nyou spend an equal time transferring as seeking, which places the page \nsize around 500kB on a modern disc, assuming RAID stripe alignment doesn't \nmake that into two seeks instead of one.\n\nHowever, for efficient CPU usage, the ideal page size for a tree index is \nmuch smaller - between two and ten entries, depending on the type of the \ndata.\n\nThere may be some mileage in reorganising indexes into a two-level system. \nThat is, have an index format where the page size is 512kB or similar, but \neach page is internally a CPU-efficient tree itself.\n\nHowever, this is beyond the scope of the problem of speeding up gist.\n\nMatthew\n\n-- \n If you let your happiness depend upon how somebody else feels about you,\n now you have to control how somebody else feels about you. -- Abraham Hicks\n", "msg_date": "Mon, 22 Mar 2010 14:23:50 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance" } ]
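The test case quoted in this thread can be reproduced with a sketch along the following lines. It is only an approximation: it assumes the btree_gist contrib module (needed for a GiST index on a plain integer column), and the table names, row counts and BETWEEN range simply mirror the a/b example in the quoted plans.

-- rough reproduction of the a/b GiST test case; assumes contrib/btree_gist
CREATE EXTENSION IF NOT EXISTS btree_gist;  -- on 8.x/9.0, load contrib/btree_gist.sql instead

CREATE TABLE a (a integer);
CREATE TABLE b (b integer);
INSERT INTO a SELECT generate_series(1, 1000000);
INSERT INTO b SELECT generate_series(1, 1000000);

CREATE INDEX a_a ON a USING gist (a);   -- the GiST index being measured
ANALYZE a;
ANALYZE b;

EXPLAIN ANALYSE SELECT * FROM a, b WHERE a.a BETWEEN b.b AND b.b + 2;

Re-running the same script against a server built with a different --with-blocksize setting is one way to compare the per-loop index scan times discussed above.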
[ { "msg_contents": "On a 8 processor system, my stats collector is always at 100% CPU. \nMeanwhile disk I/O is very low. We have many databases, they are \naccessed frequently. Sometimes there are big table updates, but in most \nof the time only simple queries are ran against the databases, returning \na few records only. From the maximum possible 8.0 system load, the \naverage load is always above 1.1 and from this, 1.0 is the stats \ncollector and 0.1 is the remaining of the system. If I restart the \npostgresql server, then the stats collector uses 0% CPU for about 10 \nminutes, then goes up to 100% again. Is there a way to tell why it is \nworking so much?\n\nI asked this problem some months ago on a different mailing list. I was \nasked to provide tracebacks of the stats collector, but due to a bug in \nthe FreeBSD ppid() function, I'm not able to trace the stats collector.\n\nThank you,\n\n Laszlo\n\n", "msg_date": "Fri, 05 Jun 2009 11:58:40 +0200", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Why is my stats collector so busy?" }, { "msg_contents": "Laszlo Nagy wrote:\n> On a 8 processor system, my stats collector is always at 100% CPU. \n> Meanwhile disk I/O is very low. We have many databases, they are \n> accessed frequently. Sometimes there are big table updates, but in most \n> of the time only simple queries are ran against the databases, returning \n> a few records only. From the maximum possible 8.0 system load, the \n> average load is always above 1.1 and from this, 1.0 is the stats \n> collector and 0.1 is the remaining of the system. If I restart the \n> postgresql server, then the stats collector uses 0% CPU for about 10 \n> minutes, then goes up to 100% again. Is there a way to tell why it is \n> working so much?\n> \n> I asked this problem some months ago on a different mailing list. I was \n> asked to provide tracebacks of the stats collector, but due to a bug in \n> the FreeBSD ppid() function, I'm not able to trace the stats collector.\n\nWhat version of Postgres are you using?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 5 Jun 2009 09:38:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is my stats collector so busy?" }, { "msg_contents": "Laszlo Nagy <[email protected]> writes:\n> On a 8 processor system, my stats collector is always at 100% CPU. \n\nWhat platform? What Postgres version?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Jun 2009 09:47:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is my stats collector so busy? " }, { "msg_contents": "On Fri, Jun 5, 2009 at 9:38 AM, Bruce Momjian<[email protected]> wrote:\n> Laszlo Nagy wrote:\n>> On a 8 processor system, my stats collector is always at 100% CPU.\n>> Meanwhile disk I/O is very low. We have many databases, they are\n>> accessed frequently. Sometimes there are big table updates, but in most\n>> of the time only simple queries are ran against the databases, returning\n>> a few records only. From the maximum possible 8.0 system load, the\n>> average load is always above 1.1 and from this, 1.0 is the stats\n>> collector and 0.1 is the remaining of the system. If I restart the\n>> postgresql server, then the stats collector uses 0% CPU for about 10\n>> minutes, then goes up to 100% again. 
Is there a way to tell why it is\n>> working so much?\n>>\n>> I asked this problem some months ago on a different mailing list. I was\n>> asked to provide tracebacks of the stats collector, but due to a bug in\n>> the FreeBSD ppid() function, I'm not able to trace the stats collector.\n>\n> What version of Postgres are you using?\n\nA little context here. The stats collector is really version\ndependent...it gets tweaked just about every version of postgres...it\nis more or less unrecognizable since the 8.0 version of postgresql,\nwhere I would simply turn it off and run analyze myself.\n\nBe prepared for the advice to consider upgrading to help deal with\nthis issue. 8.4 in fact has some enhancements that will help with\nsituations like this.\n\nmerlin\n", "msg_date": "Fri, 5 Jun 2009 09:52:02 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is my stats collector so busy?" }, { "msg_contents": "\n> On Fri, Jun 5, 2009 at 9:38 AM, Bruce Momjian<[email protected]> wrote:\n> \n>> Laszlo Nagy wrote:\n>> \n>>> On a 8 processor system, my stats collector is always at 100% CPU.\n>>> Meanwhile disk I/O is very low. We have many databases, they are\n>>> accessed frequently. Sometimes there are big table updates, but in most\n>>> of the time only simple queries are ran against the databases, returning\n>>> a few records only. From the maximum possible 8.0 system load, the\n>>> average load is always above 1.1 and from this, 1.0 is the stats\n>>> collector and 0.1 is the remaining of the system. If I restart the\n>>> postgresql server, then the stats collector uses 0% CPU for about 10\n>>> minutes, then goes up to 100% again. Is there a way to tell why it is\n>>> working so much?\n>>>\n>>> I asked this problem some months ago on a different mailing list. I was\n>>> asked to provide tracebacks of the stats collector, but due to a bug in\n>>> the FreeBSD ppid() function, I'm not able to trace the stats collector\n>>> \nI've been having the same problem for several months now. I posted \nsomething to the novice list back in January but it really never went \nanywhere so I dropped it. Formalities... v8.3.7 build 1400, Windows XP \n64-bit, two Opteron 2218. This is my personal database. It runs a \nsingle database locally on my box and I'm the only person that ever \naccesses it.\n\n From a fresh start of the server I get one postgres process that will \nrun 100% of a CPU with no I/O essentially forever. If I use Process \nExplorer to identify the process and attach the debugger it will \nterminate and then restart with another process id. When I saw the \nprevious post I looked at the process a bit closer and below is what is \nlisted from Process Explorer for the problem process:\n\n\\BaseNamedObjects\\pgident(3432): postgres: stats collector process\n\nWhat I have resorted to is just suspending this process so it's not \nwasting one of my CPUs and everything seems to be working fine. I \nrealize this is just a bandage but it works for me. I'm just a novice \nso if anyone has suggestions on what I can do to provide more \ninformation to try and track this down I'd appreciate it. I figured it \nwas just something I had screwed up but now that someone else is seeing \nthe same problem I know it's not just my problem.\n\nBob\n\n", "msg_date": "Fri, 05 Jun 2009 14:01:40 -0500", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is my stats collector so busy?" 
}, { "msg_contents": "\n>\n> What version of Postgres are you using?\n>\n> \n\n8.3.5 on FreeBSD amd64\n\n", "msg_date": "Mon, 08 Jun 2009 13:09:48 +0200", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is my stats collector so busy?" }, { "msg_contents": "Tom Lane wrote:\n> Laszlo Nagy <[email protected]> writes:\n> \n>> On a 8 processor system, my stats collector is always at 100% CPU. \n>> \n>\n> What platform? What Postgres version?\n>\n> \t\t\tregards, tom lane\n>\n> \n8.3.5 on FreeBSD 7.0 amd64\n", "msg_date": "Wed, 10 Jun 2009 11:10:28 +0200", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is my stats collector so busy?" }, { "msg_contents": "Laszlo Nagy <[email protected]> writes:\n> On a 8 processor system, my stats collector is always at 100% CPU. \n> 8.3.5 on FreeBSD 7.0 amd64\n\nHmm. How many tables in the installation? Or perhaps more to the\npoint, how large is $PGDATA/global/pgstat.stat? It might be that\nit's just spending too much time dumping out that file. There's\na fix for that planned for 8.4 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jun 2009 13:07:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is my stats collector so busy? " } ]
[ { "msg_contents": "Found a notice after completing the\n vacuumdb -p 5433 -- all --analyze --full -v\nmax_fsm_relation = 1400 in postgresql.conf\nThou all our 50 db individually have less then 1400 relation , when it\ncompletes , there was NOTICE that increase the max_fsm_relation.\nINFO: free space map contains 10344 pages in 1400 relations\nDETAIL: A total of 25000 page slots are in use (including overhead).\n54304 page slots are required to track all free space.\nCurrent limits are: 25000 page slots, 1400 relations, using 299 KB.\nNOTICE: max_fsm_relations(1400) equals the number of relations checked\nHINT: You have at least 1400 relations. Consider increasing the\nconfiguration parameter \"max_fsm_relations\".\nVACUUM\n\nBut there nearly only 300 tables in that db. Is the free space map is per\nDB or for all DB. Can i know the reason of this problem?\n\n\n-Arvind S\n\n*\n\n\"Many of lifes failure are people who did not realize how close they were to\nsuccess when they gave up.\"\n-Thomas Edison*\n\nFound a notice after completing the  vacuumdb -p 5433  -- all --analyze --full -v max_fsm_relation = 1400 in postgresql.confThou all our 50 db individually have less then 1400 relation , when it completes , there was NOTICE that increase the max_fsm_relation. \n\nINFO:  free space map contains 10344 pages in 1400 relationsDETAIL:  A total of 25000 page slots are in use (including overhead).54304 page slots are required to track all free space.Current limits are:  25000 page slots, 1400 relations, using 299 KB.\n\nNOTICE:  max_fsm_relations(1400) equals the number of relations checkedHINT:  You have at least 1400 relations.  Consider increasing the configuration parameter \"max_fsm_relations\".VACUUMBut there nearly only 300 tables in that db. Is the  free space map is per DB or for all DB. Can i know the reason of this problem?\n-Arvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"-Thomas Edison", "msg_date": "Sun, 7 Jun 2009 04:06:13 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum ALL FULL" }, { "msg_contents": "should i have to increase max_fsm_relations more. If yes why i have to ?\nSince number of relation is less only.\n\n--Arvind S\n\n\n\nOn Sun, Jun 7, 2009 at 4:06 AM, S Arvind <[email protected]> wrote:\n\n> Found a notice after completing the\n> vacuumdb -p 5433 -- all --analyze --full -v\n> max_fsm_relation = 1400 in postgresql.conf\n> Thou all our 50 db individually have less then 1400 relation , when it\n> completes , there was NOTICE that increase the max_fsm_relation.\n> INFO: free space map contains 10344 pages in 1400 relations\n> DETAIL: A total of 25000 page slots are in use (including overhead).\n> 54304 page slots are required to track all free space.\n> Current limits are: 25000 page slots, 1400 relations, using 299 KB.\n> NOTICE: max_fsm_relations(1400) equals the number of relations checked\n> HINT: You have at least 1400 relations. Consider increasing the\n> configuration parameter \"max_fsm_relations\".\n> VACUUM\n>\n> But there nearly only 300 tables in that db. Is the free space map is per\n> DB or for all DB. Can i know the reason of this problem?\n>\n>\n> -Arvind S\n>\n> *\n>\n> \"Many of lifes failure are people who did not realize how close they were\n> to success when they gave up.\"\n> -Thomas Edison*\n>\n\nshould i have to increase max_fsm_relations more. If yes why i have to ? 
Since number of relation is less only.--Arvind SOn Sun, Jun 7, 2009 at 4:06 AM, S Arvind <[email protected]> wrote:\nFound a notice after completing the  vacuumdb -p 5433  -- all --analyze --full -v max_fsm_relation = 1400 in postgresql.conf\n\nThou all our 50 db individually have less then 1400 relation , when it completes , there was NOTICE that increase the max_fsm_relation. \nINFO:  free space map contains 10344 pages in 1400 relationsDETAIL:  A total of 25000 page slots are in use (including overhead).54304 page slots are required to track all free space.Current limits are:  25000 page slots, 1400 relations, using 299 KB.\n\n\nNOTICE:  max_fsm_relations(1400) equals the number of relations checkedHINT:  You have at least 1400 relations.  Consider increasing the configuration parameter \"max_fsm_relations\".VACUUMBut there nearly only 300 tables in that db. Is the  free space map is per DB or for all DB. Can i know the reason of this problem?\n-Arvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"-Thomas Edison", "msg_date": "Sun, 7 Jun 2009 04:10:03 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ALL FULL" }, { "msg_contents": "S Arvind <[email protected]> writes:\n> But there nearly only 300 tables in that db. Is the free space map is per\n> DB or for all DB. Can i know the reason of this problem?\n\nIt's across all DBs in the installation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Jun 2009 18:45:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ALL FULL " }, { "msg_contents": "Thanks Tom,\nSo do i have to increase the max_fsm_relation based on (Average_no_relation\nper db * number of db)? if so it will be very high since in our one db\nserver we have 200 db with average 800 tables in each db. What is the value\nwe have to give for this kind of server?\n\n-Arvind S\n\n\n\nOn Sun, Jun 7, 2009 at 4:15 AM, Tom Lane <[email protected]> wrote:\n\n> S Arvind <[email protected]> writes:\n> > But there nearly only 300 tables in that db. Is the free space map is\n> per\n> > DB or for all DB. Can i know the reason of this problem?\n>\n> It's across all DBs in the installation.\n>\n> regards, tom lane\n>\n\nThanks Tom, So do i have to increase the max_fsm_relation based on (Average_no_relation per db * number of db)? if so it will be very high since in our one db server we have 200 db with average 800 tables in each db. What is the value we have to give for this kind of server?\n-Arvind SOn Sun, Jun 7, 2009 at 4:15 AM, Tom Lane <[email protected]> wrote:\nS Arvind <[email protected]> writes:\n> But there nearly only 300 tables in that db. Is the  free space map is per\n> DB or for all DB. Can i know the reason of this problem?\n\nIt's across all DBs in the installation.\n\n                        regards, tom lane", "msg_date": "Sun, 7 Jun 2009 04:24:29 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ALL FULL" }, { "msg_contents": "So is it no use running\nvacuumdb --all --analyze --full\nas fsm map is full?\n\n-Arvind S\n\n\nOn Sun, Jun 7, 2009 at 4:24 AM, S Arvind <[email protected]> wrote:\n\n> Thanks Tom,\n> So do i have to increase the max_fsm_relation based on (Average_no_relation\n> per db * number of db)? if so it will be very high since in our one db\n> server we have 200 db with average 800 tables in each db. 
What is the value\n> we have to give for this kind of server?\n>\n> -Arvind S\n>\n>\n>\n>\n> On Sun, Jun 7, 2009 at 4:15 AM, Tom Lane <[email protected]> wrote:\n>\n>> S Arvind <[email protected]> writes:\n>> > But there nearly only 300 tables in that db. Is the free space map is\n>> per\n>> > DB or for all DB. Can i know the reason of this problem?\n>>\n>> It's across all DBs in the installation.\n>>\n>> regards, tom lane\n>>\n>\n>\n\nSo is it no use running vacuumdb --all --analyze --fullas fsm map is full? -Arvind S\nOn Sun, Jun 7, 2009 at 4:24 AM, S Arvind <[email protected]> wrote:\n\nThanks Tom, So do i have to increase the max_fsm_relation based on (Average_no_relation per db * number of db)? if so it will be very high since in our one db server we have 200 db with average 800 tables in each db. What is the value we have to give for this kind of server?\n\n-Arvind SOn Sun, Jun 7, 2009 at 4:15 AM, Tom Lane <[email protected]> wrote:\n\nS Arvind <[email protected]> writes:\n> But there nearly only 300 tables in that db. Is the  free space map is per\n> DB or for all DB. Can i know the reason of this problem?\n\nIt's across all DBs in the installation.\n\n                        regards, tom lane", "msg_date": "Sun, 7 Jun 2009 04:29:53 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ALL FULL" }, { "msg_contents": "S Arvind <[email protected]> writes:\n> So is it no use running\n> vacuumdb --all --analyze --full\n> as fsm map is full?\n\nWell, it's not of *no* use. But you'd be well advised to crank up the\nFSM size.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Jun 2009 19:02:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ALL FULL " }, { "msg_contents": "Sorry Tom, i cant able to understand. Should i have to increse the\nmax_fsm_rel based on formula and re-run the vacuum command? The main reason\nfor vacuum for us is to increase performance of our db. Please tell value\nfor our kind of server(as provided in previous mail) ?\n\n-- Arvind S\n\nOn Sun, Jun 7, 2009 at 4:32 AM, Tom Lane <[email protected]> wrote:\n\n> S Arvind <[email protected]> writes:\n> > So is it no use running\n> > vacuumdb --all --analyze --full\n> > as fsm map is full?\n>\n> Well, it's not of *no* use. But you'd be well advised to crank up the\n> FSM size.\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nSorry Tom, i cant able to understand. Should i have to increse the max_fsm_rel based on formula and re-run the vacuum command? The main reason for vacuum for us is to increase performance of our db. Please tell value for our kind of server(as provided in previous mail) ?\n-- Arvind SOn Sun, Jun 7, 2009 at 4:32 AM, Tom Lane <[email protected]> wrote:\nS Arvind <[email protected]> writes:\n> So is it no use running\n> vacuumdb --all --analyze --full\n> as fsm map is full?\n\nWell, it's not of *no* use.  
But you'd be well advised to crank up the\nFSM size.\n\n                        regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 7 Jun 2009 04:39:40 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ALL FULL" }, { "msg_contents": "S Arvind <[email protected]> writes:\n> So do i have to increase the max_fsm_relation based on (Average_no_relation\n> per db * number of db)? if so it will be very high since in our one db\n> server we have 200 db with average 800 tables in each db. What is the value\n> we have to give for this kind of server?\n\nAbout 160000.\n\nOne wonders whether you shouldn't rethink your schema design. Large\nnumbers of small tables usually are not a good use of SQL. (I assume\nthey're small, else you'd have had serious bloat problems already from\nyour undersized max_fsm_pages setting ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Jun 2009 19:12:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ALL FULL " }, { "msg_contents": "Thanks Tom Lane,\n I think we must have to consider about your last mail words. But now\nreducing the table is mearly impossible, but very thanks for advice , we\nwill try it in future.\n\n-Arvind S\n\n\n\nOn Sun, Jun 7, 2009 at 4:42 AM, Tom Lane <[email protected]> wrote:\n\n> S Arvind <[email protected]> writes:\n> > So do i have to increase the max_fsm_relation based on\n> (Average_no_relation\n> > per db * number of db)? if so it will be very high since in our one db\n> > server we have 200 db with average 800 tables in each db. What is the\n> value\n> > we have to give for this kind of server?\n>\n> About 160000.\n>\n> One wonders whether you shouldn't rethink your schema design. Large\n> numbers of small tables usually are not a good use of SQL. (I assume\n> they're small, else you'd have had serious bloat problems already from\n> your undersized max_fsm_pages setting ...)\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks Tom Lane,   I think we must have to consider about your last mail words. But now reducing the table is mearly impossible, but very thanks for advice , we will try it in future.-Arvind S\nOn Sun, Jun 7, 2009 at 4:42 AM, Tom Lane <[email protected]> wrote:\nS Arvind <[email protected]> writes:\n> So do i have to increase the max_fsm_relation based on (Average_no_relation\n> per db * number of db)? if so it will be very high since in our one db\n> server we have 200 db with average 800 tables in each db. What is the value\n> we have to give for this kind of server?\n\nAbout 160000.\n\nOne wonders whether you shouldn't rethink your schema design.  Large\nnumbers of small tables usually are not a good use of SQL.  (I assume\nthey're small, else you'd have had serious bloat problems already from\nyour undersized max_fsm_pages setting ...)\n\n                        regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 7 Jun 2009 04:58:49 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ALL FULL" } ]
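The kind of estimate given above can be sketched per database with a catalog count; tables, indexes and TOAST tables all occupy free-space-map relation slots, so the figure below has to be multiplied by the number of databases in the cluster and compared against the current settings. This only applies up to 8.3, since 8.4 replaces the fixed-size FSM and drops these parameters.

-- relations in the current database that can occupy FSM relation slots
SELECT count(*) AS fsm_relations
FROM pg_class
WHERE relkind IN ('r', 'i', 't');   -- tables, indexes, toast tables

SHOW max_fsm_relations;
SHOW max_fsm_pages;

After raising the settings, the "page slots are required" line printed by a database-wide VACUUM VERBOSE is the figure to size max_fsm_pages against.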
[ { "msg_contents": "In single server(4gb ram 2 core 2 duo), which design is good for\nperformance?\n1) single installation for 200 db with 800 tables/db\nor\n2) Two installation (instance) running on different port with each\nhandling 100 db.\n\nWhich of this design is good for postgres where our goal is high\nperformance.\n\n-Arvind S\n\n*\n\n\"Many of lifes failure are people who did not realize how close they were to\nsuccess when they gave up.\"\n-Thomas Edison*\n\nIn single server(4gb ram 2 core 2 duo),  which design is good for performance?1)     single installation for 200 db with 800 tables/dbor2)     Two installation (instance) running on different port with each handling 100 db.\nWhich of this design is good for postgres where our goal is high performance.-Arvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"\n\n-Thomas Edison", "msg_date": "Sun, 7 Jun 2009 05:11:44 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres installation for Performance" } ]
[ { "msg_contents": "Hello all,\n\nWe're implementing a fairly large J2EE application, I'm estimating around \n450,000 concurrent users at high peak. Performing reads and writes and\nwe have a very high performance requirement.\n\nI'll be using connection pooling (probably the pooling delivered with \nGeronimo).\n\nI'd like to get an idea of \"how big can I go\". without running into context\nswitch storms, or hiting some other wall. \n\nThe design actually calls for multiple databases but I'm trying to get a \ngood idea of the max size / database. (I.e., don't want 50+ database servers \nif i can avoid it)\n\nWe'll be on 8.4 (or 8.5) by the time we go live and SLES linux (for now).\nI don't have hardware yet, basically, we'll purchase enough hardware to\nhandle whatever we need... \n\nIs anyone willing to share their max connections and maybe some rough \nhardware sizing (cpu/mem?).\n\nThanks\n\nDave\n", "msg_date": "Tue, 9 Jun 2009 16:12:55 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for installations with a large number of concurrent users" }, { "msg_contents": "David Kerr <[email protected]> wrote:\n \n> We'll be on 8.4 (or 8.5) by the time we go live and SLES linux (for\n> now). I don't have hardware yet, basically, we'll purchase enough\n> hardware to handle whatever we need... \n> \n> Is anyone willing to share their max connections and maybe some\n> rough hardware sizing (cpu/mem?).\n \nWe're on SLES 10 SP 2 and are handling a web site which gets two to\nthree million hits per day, running tens of millions of queries, while\nfunctioning as a replication target receiving about one million\ndatabase transactions to modify data, averaging about 10 DML\nstatements each, on one box with the following hardware:\n \n16 Xeon X7350 @ 2.93GHz\n128 GB RAM\n36 drives in RAID 5 for data for the above\n2 mirrored drives for xlog\n2 mirrored drives for the OS\n12 drives in RAID 5 for another database (less active)\na decent battery backed RAID controller, using write-back\n \nThis server also runs our Java middle tiers for accessing the database\non the box (using a home-grown framework).\n \nWe need to run three multiprocessor blades running Tomcat to handle\nthe rendering for the web application. The renderers tend to saturate\nbefore this server.\n \nThis all runs very comfortably on the one box, although we have\nmultiples (in different buildings) kept up-to-date on replication,\nto ensure high availability.\n \nThe connection pool for the web application is maxed at 25 active\nconnections; the replication at 6. We were using higher values, but\nfound that shrinking the connection pool down to this improved\nthroughput (in a saturation test) and response time (in production).\nIf the server were dedicated to PostgreSQL only, more connections\nwould probably be optimal.\n \nI worry a little when you mention J2EE. EJBs were notoriously poor\nperformers, although I hear there have been improvements. 
Just be\ncareful to pinpoint your bottlenecks so you can address the real\nproblem if there is a performance issue.\n \n-Kevin\n", "msg_date": "Wed, 10 Jun 2009 11:40:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for installations with a large number of concurrent users" }, { "msg_contents": "On Wed, Jun 10, 2009 at 11:40:21AM -0500, Kevin Grittner wrote:\n- We're on SLES 10 SP 2 and are handling a web site which gets two to\n- three million hits per day, running tens of millions of queries, while\n- functioning as a replication target receiving about one million\n- database transactions to modify data, averaging about 10 DML\n- statements each, on one box with the following hardware:\n\n[snip]\n\nThanks! that's all great info, puts me much more at ease.\n\n- The connection pool for the web application is maxed at 25 active\n- connections; the replication at 6. We were using higher values, but\n- found that shrinking the connection pool down to this improved\n- throughput (in a saturation test) and response time (in production).\n- If the server were dedicated to PostgreSQL only, more connections\n- would probably be optimal.\n\nOk, so it looks ilike I need to focus some testing there to find the\noptimal for my setup. I was thinking 25 for starters, but I think i'll \nbump that to 50.\n\n- I worry a little when you mention J2EE. EJBs were notoriously poor\n- performers, although I hear there have been improvements. Just be\n- careful to pinpoint your bottlenecks so you can address the real\n- problem if there is a performance issue.\n\nSounds good, thanks for the heads up.\n\nDave\n", "msg_date": "Wed, 10 Jun 2009 12:57:42 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for installations with a large number of concurrent users" } ]
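A snapshot along these lines is one way to watch how many of the pooled connections are actually busy while running the kind of saturation test described above. It assumes the 8.3/8.4 pg_stat_activity layout, where an idle backend reports current_query = '<IDLE>'; later releases replace that column with query plus a separate state column.

-- busy vs. idle backends per database during a load test
SELECT datname,
       count(*) AS connections,
       sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active
FROM pg_stat_activity
GROUP BY datname
ORDER BY connections DESC;

If "active" stays well below the pool's maximum while throughput has already flattened, the bottleneck is likely elsewhere (renderers, middle tier), which matches the experience reported above.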
[ { "msg_contents": "\nIt appears that I am being censored. I have tried three times to send a \nparticular message to this list over the last few days, while a different \nmail has gone through fine. There does not appear to be a publicised list \nmanager address, so I am addressing this complaint to the whole list. Is \nthere someone here who can fix the problem?\n\nMatthew\n\n-- \n Psychotics are consistently inconsistent. The essence of sanity is to\n be inconsistently inconsistent.\n", "msg_date": "Wed, 10 Jun 2009 14:12:07 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Censorship" }, { "msg_contents": "Matthew Wakeling <matthew 'at' flymine.org> writes:\n\n> It appears that I am being censored.\n\nDo you seriously think that censorman would kill your previous\nmails, but would let a \"It appears that I am being censored\" mail\ngo through?\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Wed, 10 Jun 2009 15:21:03 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Censorship" }, { "msg_contents": "On Wed, 10 Jun 2009, Guillaume Cottenceau wrote:\n> Matthew Wakeling <matthew 'at' flymine.org> writes:\n>\n>> It appears that I am being censored.\n>\n> Do you seriously think that censorman would kill your previous\n> mails, but would let a \"It appears that I am being censored\" mail\n> go through?\n\nIf it's an automatic program picking up on some phrase in my previous \nemail, yes. Unfortunately, I have no idea what might have set it off.\n\nMatthew\n\n-- \n An optimist sees the glass as half full, a pessimist as half empty,\n and an engineer as having redundant storage capacity.\n", "msg_date": "Wed, 10 Jun 2009 14:24:23 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Censorship" }, { "msg_contents": "There is a limit on the size of the mail that you can send to different\nmailing lists. Please try to remove/link your attachments if you are trying\nto send any.\n\nBest regards,\n\nOn Wed, Jun 10, 2009 at 6:42 PM, Matthew Wakeling <[email protected]>wrote:\n\n>\n> It appears that I am being censored. I have tried three times to send a\n> particular message to this list over the last few days, while a different\n> mail has gone through fine. There does not appear to be a publicised list\n> manager address, so I am addressing this complaint to the whole list. Is\n> there someone here who can fix the problem?\n>\n> Matthew\n>\n> --\n> Psychotics are consistently inconsistent. The essence of sanity is to\n> be inconsistently inconsistent.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nLets call it Postgres\n\nEnterpriseDB http://www.enterprisedb.com\n\ngurjeet[.singh]@EnterpriseDB.com\nsingh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.com\nMail sent from my BlackLaptop device\n\nThere is a limit on the size of the mail that you can send to different mailing lists. Please try to remove/link your attachments if you are trying to send any.Best regards,\n\nOn Wed, Jun 10, 2009 at 6:42 PM, Matthew Wakeling <[email protected]> wrote:\n\nIt appears that I am being censored. I have tried three times to send a particular message to this list over the last few days, while a different mail has gone through fine. 
There does not appear to be a publicised list manager address, so I am addressing this complaint to the whole list. Is there someone here who can fix the problem?\n\nMatthew\n\n-- \nPsychotics are consistently inconsistent. The essence of sanity is to\nbe inconsistently inconsistent.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Lets call it PostgresEnterpriseDB      http://www.enterprisedb.comgurjeet[.singh]@EnterpriseDB.comsingh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.com\n\nMail sent from my BlackLaptop device", "msg_date": "Wed, 10 Jun 2009 18:55:57 +0530", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Censorship" }, { "msg_contents": "On Wed, 10 Jun 2009, Gurjeet Singh wrote:\n> There is a limit on the size of the mail that you can send to different mailing lists. Please try to remove/link your\n> attachments if you are trying to send any.\n\nNo, size is not an issue - it's only 3kB.\n\nMatthew\n\n-- \n Q: What's the difference between ignorance and apathy?\n A: I don't know, and I don't care.\n", "msg_date": "Wed, 10 Jun 2009 14:39:03 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Censorship" }, { "msg_contents": "On Wed, Jun 10, 2009 at 9:39 AM, Matthew Wakeling <[email protected]>wrote:\n\n> On Wed, 10 Jun 2009, Gurjeet Singh wrote:\n>\n>> There is a limit on the size of the mail that you can send to different\n>> mailing lists. Please try to remove/link your\n>> attachments if you are trying to send any.\n>>\n>\n> No, size is not an issue - it's only 3kB.\n>\n\nAre you getting a bounce message? They usually have the reason in there.\n\n--Scott\n\n\n>\n> Matthew\n>\n> --\n> Q: What's the difference between ignorance and apathy?\n> A: I don't know, and I don't care.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Wed, Jun 10, 2009 at 9:39 AM, Matthew Wakeling <[email protected]> wrote:\nOn Wed, 10 Jun 2009, Gurjeet Singh wrote:\n\nThere is a limit on the size of the mail that you can send to different mailing lists. Please try to remove/link your\nattachments if you are trying to send any.\n\n\nNo, size is not an issue - it's only 3kB.Are you getting a bounce message?  They usually have the reason in there.--Scott   \n\nMatthew\n\n-- \nQ: What's the difference between ignorance and apathy?\nA: I don't know, and I don't care.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 10 Jun 2009 09:41:18 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Censorship" }, { "msg_contents": "On Wed, 10 Jun 2009, Scott Mead wrote:\n> Are you getting a bounce message?  They usually have the reason in there.\n\nNo, I am not getting any bounce message. My email just goes into a black \nhole, and does not appear on the web site archives either.\n\nMatthew\n\n-- \n The only secure computer is one that's unplugged, locked in a safe,\n and buried 20 feet under the ground in a secret location...and i'm not\n even too sure about that one. 
--Dennis Huges, FBI", "msg_date": "Wed, 10 Jun 2009 14:48:14 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Censorship" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Wed, 10 Jun 2009, Scott Mead wrote:\n>> Are you getting a bounce message? �They usually have the reason in there.\n\n> No, I am not getting any bounce message.\n\nIIRC, getting a bounce message for rejected or held-for-moderation\nsubmissions is a configurable subscription option, and for some\ninexplicable reason it defaults to off. Get a \"help set\" message from\nthe listserv and look through your options.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jun 2009 10:02:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Censorship " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMatthew Wakeling wrote:\n> \n> It appears that I am being censored. I have tried three times to send a\n> particular message to this list over the last few days, while a\n> different mail has gone through fine. There does not appear to be a\n> publicised list manager address, so I am addressing this complaint to\n> the whole list. Is there someone here who can fix the problem?\n\nThis one seems to have made it.\n\nRest assured, nobody is interested enough to censor anything here.\n\n- --\nDan Langille\n\nBSDCan - The Technical BSD Conference : http://www.bsdcan.org/\nPGCon - The PostgreSQL Conference: http://www.pgcon.org/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.11 (FreeBSD)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niEYEARECAAYFAko0KfsACgkQCgsXFM/7nTxc8QCgolfbFTkK1ZqtJN0XzWNghL5X\nY+YAnjvyNdhaV1LDfrALXd66CdjY8j+y\n=rxzf\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 13 Jun 2009 18:36:43 -0400", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Censorship" } ]
[ { "msg_contents": "\nAlright. I have finally worked out why I was being censored. Majordomo \ndoesn't like subject lines beginning with the word \"help\". It by default \nsends your message off to the moderators and doesn't tell you. Now follows \nmy original mail:\n\nHi. I thought by now I would be fairly good at understanding EXPLAIN \nANALYSE results, but I can't quite figure this one out. Perhaps someone \ncould help me.\n\nEXPLAIN ANALYSE SELECT *\nFROM GeneGoAnnotation a1, GOAnnotation a2, OntologyTermRelations a3\nWHERE a1.GoAnnotation = a2.id AND a2.ontologyTermId = a3.OntologyTerm;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..673587.67 rows=330437962 width=95)\n (actual time=0.056..1924645.797 rows=344491124 loops=1)\n -> Merge Join (cost=0.00..28369.58 rows=361427 width=87)\n (actual time=0.039..4620.912 rows=361427 loops=1)\n Merge Cond: (a1.goannotation = a2.id)\n -> Index Scan using genegoannotation__goannotation on genegoannotation a1\n (cost=0.00..9710.32 rows=361427 width=8)\n (actual time=0.015..840.547 rows=361427 loops=1)\n -> Index Scan using goannotation_pkey on goannotation a2\n (cost=0.00..13133.12 rows=403323 width=79)\n (actual time=0.014..1427.179 rows=403323 loops=1)\n -> Index Scan using ontologytermrelations__ontologyterm on ontologytermrelations a3\n (cost=0.00..1.20 rows=47 width=8)\n (actual time=0.022..1.908 rows=953 loops=361427)\n Index Cond: (a3.ontologyterm = a2.ontologytermid)\n Total runtime: 2524647.064 ms\n(8 rows)\n\nIf I look at the actual results of the outer-most join, the nested loop, \nthen I can take the number rows=344491124 and divide it by loops=361427 to \nget rows=953. Clearly this means that on average each index scan on a3 \nreturned 953 rows.\n\nHowever, if I apply the same logic to the estimated results, it all falls \napart. The total estimated number of rows is remarkably accurate, as is \nthe estimated number of loops (results from the merge join). However the \naverage number of rows expected to be returned from the index scan is only \n47. I don't know how the planner is getting its accurate final estimate of \nrows=330437962, because it is not from multiplying rows=361427 by rows=47. \nThat would only give 16987069 rows.\n\nAny ideas/explanations?\n\nMatthew\n\n-- \n To be or not to be -- Shakespeare\n To do is to be -- Nietzsche\n To be is to do -- Sartre\n Do be do be do -- Sinatra\n", "msg_date": "Wed, 10 Jun 2009 15:43:28 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Explaining an EXPLAIN." }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> If I look at the actual results of the outer-most join, the nested loop, \n> then I can take the number rows=344491124 and divide it by loops=361427 to \n> get rows=953. Clearly this means that on average each index scan on a3 \n> returned 953 rows.\n\nRight.\n\n> However, if I apply the same logic to the estimated results, it all falls \n> apart. The total estimated number of rows is remarkably accurate, as is \n> the estimated number of loops (results from the merge join). However the \n> average number of rows expected to be returned from the index scan is only \n> 47. I don't know how the planner is getting its accurate final estimate of \n> rows=330437962, because it is not from multiplying rows=361427 by rows=47. \n\nNo, it isn't. 
The rowcount estimate for an inner indexscan is derived\nbased on the index conditions that are assigned to the scan. It's not\nused for anything except estimating the cost of that indexscan; in\nparticular, the size of the join relation was estimated long before we\neven started to think about nestloop-with-inner-indexscan plans.\nI don't have time to look right now, but I seem to recall there are some\nconstraints that mean it's often not a very good estimate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Jun 2009 10:59:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explaining an EXPLAIN. " }, { "msg_contents": "On Wed, 10 Jun 2009, Tom Lane wrote:\n> ...the size of the join relation was estimated long before we even \n> started to think about nestloop-with-inner-indexscan plans.\n\nThat makes a lot of sense.\n\nMatthew\n\n-- \nRichards' Laws of Data Security:\n 1. Don't buy a computer.\n 2. If you must buy a computer, don't turn it on.\n", "msg_date": "Wed, 10 Jun 2009 16:02:03 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explaining an EXPLAIN. " } ]
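Working the arithmetic from the plan quoted above makes the distinction concrete:

  actual rows per inner indexscan             = 344491124 rows / 361427 loops ≈ 953
  implied rows per loop in the join estimate  = 330437962 / 361427            ≈ 914
  inner indexscan's own row estimate          = 47  (used only to cost that indexscan)

So the join-size estimate and the inner indexscan's row estimate are produced separately, which is why multiplying 361427 by 47 does not reproduce the 330437962 figure.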
[ { "msg_contents": "On Wed, 10 Jun 2009, Richard Huxton wrote:\n> Send it to the list again, and cc: me directly if you like. If it doesn't \n> show up in the next 20 minutes, I'll try sending it.\n\nOkay, here we go. I have (per Tom's advice) found some acknowledgement\nknobs on Majordomo. Here follows my original rejected mail:\n\nHi. I thought by now I would be fairly good at understanding EXPLAIN\nANALYSE results, but I can't quite figure this one out. Perhaps someone\ncould help me.\n\nEXPLAIN ANALYSE SELECT *\nFROM GeneGoAnnotation a1, GOAnnotation a2, OntologyTermRelations a3\nWHERE a1.GoAnnotation = a2.id AND a2.ontologyTermId = a3.OntologyTerm;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..673587.67 rows=330437962 width=95)\n (actual time=0.056..1924645.797 rows=344491124 loops=1)\n -> Merge Join (cost=0.00..28369.58 rows=361427 width=87)\n (actual time=0.039..4620.912 rows=361427 loops=1)\n Merge Cond: (a1.goannotation = a2.id)\n -> Index Scan using genegoannotation__goannotation on \ngenegoannotation a1\n (cost=0.00..9710.32 rows=361427 width=8)\n (actual time=0.015..840.547 rows=361427 loops=1)\n -> Index Scan using goannotation_pkey on goannotation a2\n (cost=0.00..13133.12 rows=403323 width=79)\n (actual time=0.014..1427.179 rows=403323 loops=1)\n -> Index Scan using ontologytermrelations__ontologyterm on \nontologytermrelations a3\n (cost=0.00..1.20 rows=47 width=8)\n (actual time=0.022..1.908 rows=953 loops=361427)\n Index Cond: (a3.ontologyterm = a2.ontologytermid)\n Total runtime: 2524647.064 ms\n(8 rows)\n\nIf I look at the actual results of the outer-most join, the nested loop,\nthen I can take the number rows=344491124 and divide it by loops=361427 to\nget rows=953. Clearly this means that on average each index scan on a3\nreturned 953 rows.\n\nHowever, if I apply the same logic to the estimated results, it all falls\napart. The total estimated number of rows is remarkably accurate, as is\nthe estimated number of loops (results from the merge join). However the\naverage number of rows expected to be returned from the index scan is only\n47. I don't know how the planner is getting its accurate final estimate of\nrows=330437962, because it is not from multiplying rows=361427 by rows=47.\nThat would only give 16987069 rows.\n\nAny ideas/explanations?\n\nMatthew\n\n-- \nNow, you would have thought these coefficients would be integers, given that\nwe're working out integer results. Using a fraction would seem really\nstupid. Well, I'm quite willing to be stupid here - in fact, I'm going to\nuse complex numbers. -- Computer Science Lecturer\n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 10 Jun 2009 15:43:43 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "EXPLAIN understanding? (restarted from Censorship)" }, { "msg_contents": "Please ignore - Matthew has discovered what was blocking this message.\nUse his thread instead.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 10 Jun 2009 15:47:22 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN understanding? (restarted from Censorship)" } ]
[ { "msg_contents": "Hi,\n\nOur configuration is as follows:\n\n1. A staging server, which receives new data and updates the DB\n2. Two web servers that have copies of the DB (essentially read-only) and\nanswer user queries (with load balancer)\n\nCurrently we use dump (to SQL file, i.e. pg_dump with no args) + file copy\nto replicate the DB daily between the staging and web servers, and then\nrestore (via psql) the servers one at the time. In our application we expect\nthat average daily change is only to 3% of the records. My question is what\nwould be the best way to do this replication?\n\nI read about continuous archiving and\nPITR<http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html>.\nMy understanding however (e.g. from\nthis<http://archives.postgresql.org/pgsql-admin/2006-07/msg00279.php>)\nis that I cannot do a base backup once and then e.g. apply WAL files on a\ndaily basis, starting from yesterday's DB, but must instead redo the full\nbase backup before starting recovery?\n\nThen there are Slony-I, Buchardo, Mamoth Replicator from CMO, simple\nreplication in Postgres 8.4 and other projects...\n\nSuggestions?\nThanks,\n\n-- Shaul\n\nHi,Our configuration is as follows:1. A staging server, which receives new data and updates the DB2. Two web servers that have copies of the DB (essentially read-only) and answer user queries (with load balancer)\nCurrently we use dump (to SQL file, i.e. pg_dump with no args) + file copy to replicate the DB daily between the staging and web servers, and then restore (via psql) the servers one at the time. In our application we expect that average daily change is only to 3% of the records. My question is what would be the best way to do this replication?\nI read about continuous archiving and PITR. My understanding however (e.g. from this) is that I cannot do a base backup once and then e.g. apply WAL files on a daily basis, starting from yesterday's DB, but must instead redo the full base backup before starting recovery?\nThen there are Slony-I, Buchardo, Mamoth Replicator from CMO, simple replication in Postgres 8.4 and other projects...Suggestions?Thanks,-- Shaul", "msg_date": "Thu, 11 Jun 2009 16:12:32 +0300", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres replication: dump/restore, PITR, Slony,...?" }, { "msg_contents": "Hi,\n\nShaul Dar <[email protected]> writes:\n> 1. A staging server, which receives new data and updates the DB\n> 2. Two web servers that have copies of the DB (essentially read-only)\n> and answer user queries (with load balancer)\n\n[...]\n\n> Suggestions?\n\nI'd consider WAL Shipping for the staging server and some trigger based\nasynchronous replication for feeding the web servers.\n\nMore specifically, I'd have a try at Skytools, using walmgr.py for WAL\nShipping and Londiste for replication.\n http://wiki.postgresql.org/wiki/Skytools\n http://wiki.postgresql.org/wiki/Londiste_Tutorial\n\nRegards,\n-- \ndim\n", "msg_date": "Thu, 11 Jun 2009 16:33:04 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres replication: dump/restore, PITR, Slony,...?" }, { "msg_contents": "\n> \n> Then there are Slony-I, Buchardo, Mamoth Replicator from CMO, simple\n> replication in Postgres 8.4 and other projects...\n\nCMO? :)\n\nJoshua D. 
Drake\n> \n> Suggestions?\n> Thanks,\n> \n> -- Shaul\n> \n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Thu, 11 Jun 2009 09:23:29 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres replication: dump/restore, PITR, Slony,...?" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n>> Then there are Slony-I, Buchardo, Mamoth Replicator from CMO, simple\n>> replication in Postgres 8.4 and other projects...\n\n> CMO? :)\n\nBuchardo? :)\n\n\n- --\nGreg Sabino Mullane [email protected]\nEnd Point Corporation\nPGP Key: 0x14964AC8 200906111229\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAkoxMPkACgkQvJuQZxSWSsizywCbBtuo7cbCwmlHzvbi1kak9leF\nXwYAnA5dXlZqyyUOQrymXZf4yGJSMSq6\n=UPhb\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Thu, 11 Jun 2009 16:30:06 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres replication: dump/restore, PITR, Slony,...?" }, { "msg_contents": "On Thu, 2009-06-11 at 16:30 +0000, Greg Sabino Mullane wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: RIPEMD160\n> \n> >> Then there are Slony-I, Buchardo, Mamoth Replicator from CMO, simple\n> >> replication in Postgres 8.4 and other projects...\n> \n> > CMO? :)\n> \n> Buchardo? :)\n\nA new desert, Buchardo CMO:\n\nTwo shots of brandy\nOne shot of rum\nVanilla Ice cream\nCherries\n\nBlend to perfection.\n\nJoshua D. Drake\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Thu, 11 Jun 2009 09:32:14 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres replication: dump/restore, PITR, Slony,...?" }, { "msg_contents": "All right, so I misspelled Bucardo (also Mammoth...), and the company's name\nis Command Prompt (please get someone to work on that incomprehensible logo\n- I went back and looked at it and still have no clue what it means :-).\n\nNow how about some serious answers relating to my questions?\n\nDimitri, thanks for your answer. I don't need to replicate TO the staging\nserver (this is where the changes happen) but rather FROM the staging server\nTO the Web (query) servers. I think my description wasn't clear enough.\nCurrently the staging DB changes daily as new records are inserted to it\n(would have liked to use COPY instead, but I believe that's only useful for\nbulk loading the whole DB, not appending to it?). Those changes need to be\nreflected on the Web servers. Today this is done via dump-copy files-restore\nof the whole DB (we shut down each Web server DB while restoring it,\nobviously), and I we are looking for a better way.\n\nI would truly appreciate specific suggestions and pointers/references (\"some\ntrigger based asynchronous replication\" doesn't help much...).\n\nAlso is my understanding of PITR limitations correct?\n\nThanks,\n\n-- Shaul\n\nOn Thu, Jun 11, 2009 at 7:32 PM, Joshua D. 
Drake <[email protected]>wrote:\n\n> On Thu, 2009-06-11 at 16:30 +0000, Greg Sabino Mullane wrote:\n> > -----BEGIN PGP SIGNED MESSAGE-----\n> > Hash: RIPEMD160\n> >\n> > >> Then there are Slony-I, Buchardo, Mamoth Replicator from CMO, simple\n> > >> replication in Postgres 8.4 and other projects...\n> >\n> > > CMO? :)\n> >\n> > Buchardo? :)\n>\n> A new desert, Buchardo CMO:\n>\n> Two shots of brandy\n> One shot of rum\n> Vanilla Ice cream\n> Cherries\n>\n> Blend to perfection.\n>\n> Joshua D. Drake\n>\n\nAll right, so I misspelled Bucardo (also Mammoth...), and the company's name is Command Prompt (please get someone to work on that incomprehensible logo - I went back and looked at it and still have no clue what it means :-).\nNow how about some serious answers relating to my questions?Dimitri, thanks for your answer. I don't need to replicate TO the staging server (this is where the changes happen) but rather FROM the staging server TO the Web (query) servers. I think my description wasn't clear enough. Currently the staging DB changes daily as new records are inserted to it (would have liked to use COPY instead, but I believe that's only useful for bulk loading the whole DB, not appending to it?). Those changes need to be reflected on the Web servers. Today this is done via dump-copy files-restore of the whole DB (we shut down each Web server DB while restoring it, obviously), and I we are looking for a better way.\nI would truly appreciate specific suggestions and pointers/references (\"some trigger based asynchronous replication\" doesn't help much...).Also is my understanding of PITR limitations correct?\nThanks,-- Shaul\nOn Thu, Jun 11, 2009 at 7:32 PM, Joshua D. Drake <[email protected]> wrote:\nOn Thu, 2009-06-11 at 16:30 +0000, Greg Sabino Mullane wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: RIPEMD160\n>\n> >> Then there are Slony-I, Buchardo, Mamoth Replicator from CMO, simple\n> >> replication in Postgres 8.4 and other projects...\n>\n> > CMO? :)\n>\n> Buchardo? :)\n\nA new desert, Buchardo CMO:\n\nTwo shots of brandy\nOne shot of rum\nVanilla Ice cream\nCherries\n\nBlend to perfection.\n\nJoshua D. Drake", "msg_date": "Fri, 12 Jun 2009 14:27:10 +0300", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres replication: dump/restore, PITR, Slony,...?" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE----- \nHash: RIPEMD160 \n\n\n> Currently we use dump (to SQL file, i.e. pg_dump with no args) + file copy\n> to replicate the DB daily between the staging and web servers, and then\n> restore (via psql) the servers one at the time. In our application we expect\n> that average daily change is only to 3% of the records. My question is what\n> would be the best way to do this replication?\n\nBucardo should handle this easy enough. Just install Bucardo, tell it about the\ndatabases, tell it which tables to replicate, and start it up. If the tables\nhave unique indexes (e.g. PKs) you can use the 'pushdelta' type of sync, which\nwill copy rows as they change from the staging server to the web servers.\nIf the tables don't have unique indexes, you'll have to use the 'fullcopy'\nsync type, which, as you might imagine, copies the entire table each time.\n\nYou can further control both of these to fire automatically when the data\non the staging server changes, or to only fire when you tell it to, e.g.\nevery X minutes, or based on some other criteria. 
You can also configure\nhow many of the web servers get pushed to at one time, from 1 up to\nall of them.\n\n- --\nGreg Sabino Mullane [email protected]\nEnd Point Corporation\nPGP Key: 0x14964AC8 200906121509\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAkoyqFkACgkQvJuQZxSWSsjB8ACffcQRD+Vb7SV0RZnoo70hkpwB\nnycAn0QDiogs3EuCrc9+h4rMoToTFopz\n=Sltu\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Fri, 12 Jun 2009 19:11:42 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres replication: dump/restore, PITR, Slony,...?" } ]
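The incremental approach discussed in this thread, shipping only the small daily delta instead of dumping and restoring the whole database, can be prototyped with plain COPY, since COPY FROM appends rows to an existing table rather than replacing it. A minimal sketch, assuming a hypothetical table my_table with a created_on timestamp column recording when each row was inserted:

    -- on the staging server: export only the rows added since yesterday
    COPY (SELECT * FROM my_table
          WHERE created_on >= CURRENT_DATE - 1)
      TO '/tmp/my_table_delta.csv' CSV;

    -- on each web server: append the delta; existing rows are left untouched
    COPY my_table FROM '/tmp/my_table_delta.csv' CSV;

This only covers appended rows; anything updated or deleted on the staging side still needs a trigger-based tool such as Bucardo or Slony, as suggested earlier in the thread.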
[ { "msg_contents": "Hey folks,\n\nI'm new to performance monitoring and tuning of PG/Linux (have a fair\nbit of experience in Windows, though those skills were last used about\n5 years ago)\n\nI finally have Munin set up in my production environment, and my\ngoodness it tracks a whole whack of stuff by default!\n\nI want to turn off the graphing of unimportant data, to unclutter the\ngraphs and focus on what's important.\n\nSo, from the perspective of both Linux and PG, is there canonical list\nof \"here are the most important X things to track\" ?\n\nOn the PG side I currently have 1 graph for # connections, another for\nDB size, and another for TPS. Then there are a few more graphs that\nare really cluttered up, each with 8 or 9 things on them.\n\nOn the Linux side, I clearly want to track HD usage, CPU, memory. But\nnot sure what aspects of each. There is also a default Munin graph\nfor IO Stat - not sure what I am looking for there (I know what it\ndoes of course, just not sure what to look for in the numbers)\n\nI know some of this stuff was mentioned at PG Con so now I start going\nback through all my notes and the videos. Already been reviewing.\n\nIf there is not already a wiki page for this I'll write one. I see\nthis is a good general jump off point :\n\nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\nBut jumping off from there (and searching on \"Performance\") does not\ncome up with anything like what I am talking about.\n\nIs there some good Linux performance monitoring and tuning reading\nthat you can recommend?\n\nthanks,\n-Alan\n\n-- \n“Mother Nature doesn’t do bailouts.”\n - Glenn Prickett\n", "msg_date": "Fri, 12 Jun 2009 15:52:19 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "what server stats to track / monitor ?" }, { "msg_contents": "On Fri, Jun 12, 2009 at 03:52:19PM -0400, Alan McKay wrote:\n> I want to turn off the graphing of unimportant data, to unclutter the\n> graphs and focus on what's important.\n\nI'm unfamiliar with Munin, but if you can turn off the graphing (so as to\nachieve your desired level of un-cluttered-ness) without disabling the capture\nof the data that was being graphed, you'll be better off. Others' opinions may\ncertainly vary, but in my experience, provided you're not causing a\nperformance problem simply because you're monitoring so much stuff, you're\nbest off capturing every statistic reasonably possible. The time will probably\ncome when you'll find that that statistic, and all the history you've been\ncapturing for it, becomes useful.\n\n- Josh / eggyknap", "msg_date": "Fri, 12 Jun 2009 14:34:41 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what server stats to track / monitor ?" }, { "msg_contents": "Hi Alan. 
For simple needs you can use Staplr, it's very easy to configure.\nThere's also one - zabbix, pretty much.\n\n2009/6/13 Alan McKay:\n> Hey folks,\n>\n> I'm new to performance monitoring and tuning of PG/Linux (have a fair\n> bit of experience in Windows, though those skills were last used about\n> 5 years ago)\n>\n> I finally have Munin set up in my production environment, and my\n> goodness it tracks a whole whack of stuff by default!\n>\n> I want to turn off the graphing of unimportant data, to unclutter the\n> graphs and focus on what's important.\n>\n> So, from the perspective of both Linux and PG, is there canonical list\n> of \"here are the most important X things to track\" ?\n>\n> On the PG side I currently have 1 graph for # connections, another for\n> DB size, and another for TPS.  Then there are a few more graphs that\n> are really cluttered up, each with 8 or 9 things on them.\n>\n> On the Linux side, I clearly want to track HD usage, CPU, memory.  But\n> not sure what aspects of each.  There is also a default Munin graph\n> for IO Stat - not sure what I am looking for there (I know what it\n> does of course, just not sure what to look for in the numbers)\n>\n> I know some of this stuff was mentioned at PG Con so now I start going\n> back through all my notes and the videos.  Already been reviewing.\n>\n> If there is not already a wiki page for this I'll write one.   I see\n> this is a good general jump off point :\n>\n> http://wiki.postgresql.org/wiki/Performance_Optimization\n>\n> But jumping off from there (and searching on \"Performance\") does not\n> come up with anything like what I am talking about.\n>\n> Is there some good Linux performance monitoring and tuning reading\n> that you can recommend?\n>\n> thanks,\n> -Alan\n>\n> --\n> “Mother Nature doesn’t do bailouts.”\n>         - Glenn Prickett\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sat, 13 Jun 2009 02:40:10 +0600", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what server stats to track / monitor ?" }, { "msg_contents": "> I'm unfamiliar with Munin, but if you can turn off the graphing (so as to\n> achieve your desired level of un-cluttered-ness) without disabling the capture\n> of the data that was being graphed, you'll be better off. Others' opinions may\n> certainly vary, but in my experience, provided you're not causing a\n> performance problem simply because you're monitoring so much stuff, you're\n> best off capturing every statistic reasonably possible. The time will probably\n> come when you'll find that that statistic, and all the history you've been\n> capturing for it, becomes useful.\n\nYes, Munin does allow me to turn off graphing without turning off collecting.\n\nAny pointers for good reading material here? Other tips?\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Fri, 12 Jun 2009 16:40:12 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what server stats to track / monitor ?" }, { "msg_contents": "Yes, I'm familiar with Staplr - if anyone from myyearbook.com is\nlistening in, I'm still hoping for that 0.7 update :-) I plan to run\nboth for the immediate term at least.\n\nBut this only concerns collecting - my biggest concern is how to\nread/interpret the data! 
Pointers to good reading material would be\ngreatly appreciated.\n\nOn Fri, Jun 12, 2009 at 4:40 PM, Rauan Maemirov<[email protected]> wrote:\n> Hi Alan. For simple needs you can use Staplr, it's very easy to configure.\n> There's also one - zabbix, pretty much.\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Fri, 12 Jun 2009 16:41:53 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what server stats to track / monitor ?" }, { "msg_contents": "On Fri, Jun 12, 2009 at 04:40:12PM -0400, Alan McKay wrote:\n> Any pointers for good reading material here? Other tips?\n\nThe manuals and/or source code for your software? Stories, case studies, and\nreports from others in similar situations who have gone through problems?\nMonitoring's job is to avert crises by letting you know things are going south\nbefore they die completely. So you probably want to figure out ways in which\nyour setup is most likely to die, and make sure the critical points in that\nequation are well-monitored, and you understand the monitoring. Provided you\nstick with it long enough, you'll inevitably encounter a breakdown of some\nkind or other, which will help you refine your idea of which points are\ncritical.\n\nApart from that, I find it's helpful to read about statistics and formal\ntesting, so you have some idea how confident you can be that the monitors are\naccurate, that your decisions are justified, etc. But that's not everyone's\ncup of tea...\n\n- Josh / eggyknap", "msg_date": "Sat, 13 Jun 2009 14:23:09 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what server stats to track / monitor ?" }, { "msg_contents": "On Fri, 12 Jun 2009, Alan McKay wrote:\n\n> So, from the perspective of both Linux and PG, is there canonical list\n> of \"here are the most important X things to track\" ?\n\nNot really, which is why you haven't gotten such a list from anyone here. \nExactly what's important to track does vary a bit based on expected \nworkload, and most of the people who have been through this enough to give \nyou a good answer are too busy to write one (you've been in my \"I should \nrespond to that\" queue for two weeks before I found time to write).\n\n> Is there some good Linux performance monitoring and tuning reading\n> that you can recommend?\n\nThe only good intro to this I've ever seen, from the perspective of \nmonitoring things would be useful to a database administrator, is the \ncoverage of monitoring in \"Performance Tuning for Linux Servers\" by \nJohnson/Huizenga/Pulavarty. Their tuning advice wasn't so useful, but \nmost OS tuning suggestions aren't either.\n\nThe more useful way to ask the question you'd like an answer to is \"when \nmy server starts to perform badly, what does that correlate with?\" Find \nout what you need to investigate to figure that out, and you can determine \nwhat you should have been monitoring all along. That is unfortunately \nworkload dependant; the stuff that tends to go wrong in a web app is very \ndifferent from what happens to a problematic data warehouse for example.\n\nThe basic important OS level stuff to watch is:\n\n-Total memory in use\n-All the CPU% numbers\n-Disk read/write MB/s at all levels of granularity you can collect (total \nacross the system, filesystem, array, individual disk). 
You'll only want \nto track the total until there's a problem, at which point it's nice to \nhave more data to drilldown into.\n\nThere's a bunch more disk and memory stats available, I rarely find them \nof any use. The one Linux specific bit I do like to monitor is the line \nlabeled \"Writeback\" in /proc/meminfo/ , because that's the best indicator \nof how much write cache is being done at the OS level. That's a warning \nsign of many problems in an area Linux often has problems with.\n\nOn the database side, you want to periodically check the important \npg_stat-* views to get an idea how much activity and happening (and where \nit's happening at), as well as looking for excessive dead tuples and bad \nindex utilization (which manifests by things like too many sequential \nscans):\n\n-pg_stat_user_indexes\n-pg_stat_user_tables\n-pg_statio_user_indexes\n-pg_statio_user_tables\n\nIf your system is write-intensive at all, you should watch \npg_stat_bgwriter too to keep an eye on when that goes badly.\n\nAt a higher level, it's a good idea to graph the size of the tables and \nindexes most important to your application over time.\n\nIt can be handy to track things derived from pg_stat_activity too, like \ntotal connections and how old the oldest transaction is. pg_locks can be \nhandy to track stats on too, something like these two counts over time:\n\nselect (select count(*) from pg_locks where granted) as granted,(select \ncount(*) from pg_locks where not granted) as ungranted;\n\nThat's the basic set I find myself looking at regularly enough that I wish \nI always had a historical record of them from the system. Never bothered \nto work this into a more formal article because a) the workload specific \nstuff makes it complicated to explain for everyone, b) the wide variation \nin and variety of monitoring tools out there, and c) wanting to cover the \nmaterial right which takes a while to do on a topic this big.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 26 Jun 2009 23:27:36 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what server stats to track / monitor ?" }, { "msg_contents": "Thanks Greg!\n\nOn Fri, Jun 26, 2009 at 11:27 PM, Greg Smith<[email protected]> wrote:\n\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Sun, 28 Jun 2009 09:52:24 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what server stats to track / monitor ?" } ]
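A few concrete snapshot queries of the kind described above, suitable for feeding into Munin or any other collector; the columns are the 8.3-era names of the statistics views mentioned in the thread:

    -- total connections and age of the oldest open transaction
    SELECT count(*) AS connections,
           max(now() - xact_start) AS oldest_xact
      FROM pg_stat_activity;

    -- tables read mostly by sequential scans (candidates for index work)
    SELECT relname, seq_scan, idx_scan
      FROM pg_stat_user_tables
     ORDER BY seq_scan DESC
     LIMIT 10;

    -- granted versus waiting locks, per the pg_locks suggestion above
    SELECT granted, count(*) FROM pg_locks GROUP BY granted;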
[ { "msg_contents": "Hi,\n\nWhen we do a ps U postgres command, we find some connection in BIND status:\n\n10088 ? Ss 0:00 postgres: chk production xxx.xx.x.xx(48672) BIND\n10090 ? Ss 0:00 postgres: chk production xxx.xx.x.xx(48674) BIND\n\n\nWe are connecting to the database using pgpool for load balancing and slony\nfor replication. Postgres version - 8.3.3.\n\nThe queries running over this connections appear to be hanged and are not\nrunning.\n\nCan anybody tell what exactly the above connection status means, and how we\ncan get rid of this connections, pg_cancel_backend doesn't seem to help\nhere?\n\nLet me know I can provide some more info.\n\nRegards,\nNimesh.\n\nHi,When we do a ps U postgres command, we find some connection in BIND status:10088 ?        Ss     0:00 postgres: chk production xxx.xx.x.xx(48672) BIND10090 ?        Ss     0:00 postgres: chk production xxx.xx.x.xx(48674) BIND\nWe are connecting to the database using pgpool for load balancing and slony for replication. Postgres version - 8.3.3.The queries running over this connections appear to be hanged and are not running. \nCan anybody tell what exactly the above connection status means, and how we can get rid of this connections, pg_cancel_backend doesn't seem to help here?Let me know I can provide some more info.Regards,\nNimesh.", "msg_date": "Mon, 15 Jun 2009 15:00:15 +0530", "msg_from": "Nimesh Satam <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres connection status as BIND" }, { "msg_contents": "Nimesh Satam <[email protected]> writes:\n> When we do a ps U postgres command, we find some connection in BIND status:\n\n> 10088 ? Ss 0:00 postgres: chk production xxx.xx.x.xx(48672) BIND\n> 10090 ? Ss 0:00 postgres: chk production xxx.xx.x.xx(48674) BIND\n\n> Can anybody tell what exactly the above connection status means,\n\nIt's trying to do the Bind Parameters step of the prepared-query\nprotocol. AFAICS, a hang here means either a problem in parameter\nvalue conversion (have you got any custom datatypes with input functions\nthat maybe aren't debugged very well?) or something odd happening while\ntrying to plan or re-plan the query. Have you identified just which\nqueries hang up this way? (pg_stat_activity might help here.)\n\nOne fairly likely theory is that these are blocked trying to acquire\nlocks on tables their queries will reference. Have you looked into\npg_locks for evidence of someone sitting on exclusive locks?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Jun 2009 10:17:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres connection status as BIND " }, { "msg_contents": "On Monday 15 June 2009, Nimesh Satam <[email protected]> wrote:\n> Hi,\n>\n> When we do a ps U postgres command, we find some connection in BIND\n> status:\n>\n> 10088 ? Ss 0:00 postgres: chk production xxx.xx.x.xx(48672)\n> BIND 10090 ? Ss 0:00 postgres: chk production\n> xxx.xx.x.xx(48674) BIND\n>\n>\n> We are connecting to the database using pgpool for load balancing and\n> slony for replication. Postgres version - 8.3.3.\n\npgpool does that on some connections on some of my servers (but not others). \nI haven't been able to figure out why.\n\n\n-- \nWARNING: Do not look into laser with remaining eye.\n", "msg_date": "Mon, 15 Jun 2009 08:08:01 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres connection status as BIND" } ]
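The two checks suggested above, pg_stat_activity for what the stuck backends are doing and pg_locks for ungranted locks, look roughly like this on 8.3 (column names such as procpid and current_query were renamed in later releases):

    -- activity of the backends shown as BIND in ps
    SELECT procpid, usename, waiting, xact_start, current_query
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>';

    -- sessions waiting on locks, with the relation involved where there is one
    SELECT l.pid, l.locktype, l.mode, c.relname
      FROM pg_locks l
      LEFT JOIN pg_class c ON c.oid = l.relation
     WHERE NOT l.granted;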
[ { "msg_contents": "Hi everybody, I'm creating my database on postgres and after some days\nof hard work I'm arrived to obtain good performance and owfull\nperformace with the same configuration.\nI have complex query that perform very well with mergejoin on and\nnestloop off.\nIf I activate nestloop postgres try to use it and the query execution\nbecome inconclusive: after 3 hours still no answare so I kill the query.\nTht's ok but, with this configuration, very simple and little query like\n\"slect colum from table where primarykey=value bacome incredibly slow.\nThe only solutionI found at the momento is to set mergejoin to off\nbefore doing this query.\nThat is an awfull solution because with that solution I have to change\nall the software (a big, old software) in the (many) points in witch\nthis kind of query are used (the same problem to set to off mergejoin\nfor all the system and activate it on che connection that have to make\nthe hard query).\nDo you have any suggestion to accelerate both complex and silply query?\nI've tried a lot of configuration in enabling different \"Planner Method\nConfiguration\" but the only combination that really accelerate hard\nquery is mergejoin on and nestloop off, other settings seems to be\nuseless.\nThank's in advance.\n\n", "msg_date": "Tue, 16 Jun 2009 15:37:42 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "performance with query" }, { "msg_contents": "On Tue, Jun 16, 2009 at 03:37:42PM +0200, Alberto Dalmaso wrote:\n> Hi everybody, I'm creating my database on postgres and after some days\n> of hard work I'm arrived to obtain good performance and owfull\n> performace with the same configuration.\n> I have complex query that perform very well with mergejoin on and\n> nestloop off.\n> If I activate nestloop postgres try to use it and the query execution\n> become inconclusive: after 3 hours still no answare so I kill the query.\n> Tht's ok but, with this configuration, very simple and little query like\n> \"slect colum from table where primarykey=value bacome incredibly slow.\n> The only solutionI found at the momento is to set mergejoin to off\n> before doing this query.\n> That is an awfull solution because with that solution I have to change\n> all the software (a big, old software) in the (many) points in witch\n> this kind of query are used (the same problem to set to off mergejoin\n> for all the system and activate it on che connection that have to make\n> the hard query).\n> Do you have any suggestion to accelerate both complex and silply query?\n> I've tried a lot of configuration in enabling different \"Planner Method\n> Configuration\" but the only combination that really accelerate hard\n> query is mergejoin on and nestloop off, other settings seems to be\n> useless.\n> Thank's in advance.\n\nIt would be helpful if you posted EXPLAIN ANALYZE results for both queries.\nThis will require you to run each query to completion; if that's not possible\nfor the 3 hour query, at least run EXPLAIN and post those results.\n\n- Josh / eggyknap", "msg_date": "Tue, 16 Jun 2009 08:13:22 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> wrote: \n \n> I have complex query that perform very well with mergejoin on and\n> nestloop off.\n> If I activate nestloop postgres try to use it and the query\n> execution become inconclusive: after 3 hours still no answare so I\n> kill the 
query.\n> Tht's ok but, with this configuration, very simple and little query\n> like \"slect colum from table where primarykey=value bacome\n> incredibly slow.\n> The only solutionI found at the momento is to set mergejoin to off\n> before doing this query.\n \nWe'll need a lot more information to be able to provide useful\nadvice.\n \nWhat version of PostgreSQL?\n \nWhat OS?\n \nWhat does the hardware look like? (CPUs, drives, memory, etc.)\n \nDo you have autovacuum running? What other regular maintenance to you\ndo?\n \nWhat does your postgresql.conf file look like? (If you can strip out\nall comments and show the rest, that would be great.)\n \nWith that as background, if you can show us the schema for the\ntable(s) involved and the text of a query, along with the EXPLAIN\nANALYZE output (or just EXPLAIN, if the query runs too long to get the\nEXPLAIN ANALYZE results) that would allow us to wee where things are\ngoing wrong. Please show this information without setting any of the\noptimizer options off; but then, as a diagnostic step, *also* show\nEXPLAIN ANALYZE results when you set options to a configuration that\nruns faster.\n \n-Kevin\n", "msg_date": "Tue, 16 Jun 2009 09:21:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "> What version of PostgreSQL?\n8.3 that comes with opensuse 11.1\n> \n> What OS?\nLinux, opensuse 11.1 64 bit\n> \n> What does the hardware look like? (CPUs, drives, memory, etc.)\n2 * opteron dual core 8 GB RAM, 70 GB SCSI U320 RAID 1\n> \n> Do you have autovacuum running? What other regular maintenance to you\n> do?\nYES, autovacuum and analyze are running, the only other activity is the\nwal backup\n> \n> What does your postgresql.conf file look like? (If you can strip out\n> all comments and show the rest, that would be great.)\n\nI'll post only the value I've changed\n\nshared_buffers = 1536MB \ntemp_buffers = 5MB \nmax_prepared_transactions = 30 \n \nwork_mem = 50MB # I've lot of work in order by\nmaintenance_work_mem =50MB \nmax_stack_depth = 6MB \n\nmax_fsm_pages = 160000 \nmax_fsm_relations = 5000\n\nwal_buffers = 3072kB \n\nenable_bitmapscan = on\nenable_hashagg = on\nenable_hashjoin = off\nenable_indexscan = on\nenable_mergejoin = on\nenable_nestloop = off\nenable_seqscan = off\nenable_sort = off\nenable_tidscan = on\n\n\neffective_cache_size = 3600MB\n\ngeqo = off\ndefault_statistics_target = 100\n\n> \n> With that as background, if you can show us the schema for the\n> table(s) involved and the text of a query, along with the EXPLAIN\n> ANALYZE output (or just EXPLAIN, if the query runs too long to get the\n> EXPLAIN ANALYZE results) that would allow us to wee where things are\n> going wrong. 
Please show this information without setting any of the\n> optimizer options off; but then, as a diagnostic step, *also* show\n> EXPLAIN ANALYZE results when you set options to a configuration that\n> runs faster.\n> \n> -Kevin\n\nThe problem is that in the simply query it uses mergejoin instead of\nnastedloop (obvious for the parameters I set) but in this situation in\nbecomes very very slow (15 sec vs 5 ms when I set to off mergejoin).\n\nThat is the explain of the complex query that works with more than\nacceptable performance\n\n\"Merge Right Join (cost=508603077.17..508603195.59 rows=1 width=227)\"\n\" Merge Cond: (ve_edil_rendite.id_domanda = domande.id_domanda)\"\n\" -> GroupAggregate (cost=0.00..105.51 rows=1031 width=11)\"\n\" -> Index Scan using pk_ve_edil_rendite on ve_edil_rendite\n(cost=0.00..86.84 rows=1157 width=11)\"\n\" -> Materialize (cost=508603077.17..508603077.18 rows=1 width=195)\"\n\" -> Nested Loop (cost=506932259.90..508603077.17 rows=1\nwidth=195)\"\n\" -> Merge Join (cost=406932259.90..408603074.89 rows=1\nwidth=188)\"\n\" Merge Cond: (domande.id_domanda =\nc_elaout_7.id_domanda)\"\n\" -> Merge Join (cost=406932259.90..408188339.97\nrows=1 width=240)\"\n\" Merge Cond: (c_elaout_5.id_domanda =\ndomande.id_domanda)\"\n\" -> Merge Join (cost=3895.15..1259628.81\nrows=138561 width=41)\"\n\" Merge Cond: (edil_veneto.id_domanda =\nc_elaout_5.id_domanda)\"\n\" -> Merge Join\n(cost=1123.18..372710.75 rows=98122 width=29)\"\n\" Merge Cond:\n(edil_veneto.id_domanda = c_elaout_6.id_domanda)\"\n\" -> Index Scan using\n\"IDX_pk_Edil_Veneto\" on edil_veneto (cost=0.00..11825.14 rows=232649\nwidth=17)\"\n\" -> Index Scan using\n\"IDX_3_c_elaout\" on c_elaout c_elaout_6 (cost=0.00..359914.34\nrows=98122 width=12)\"\n\" Index Cond:\n((c_elaout_6.node)::text = 'contributo_sociale'::text)\"\n\" -> Index Scan using \"IDX_3_c_elaout\"\non c_elaout c_elaout_5 (cost=0.00..887091.20 rows=245306 width=12)\"\n\" Index Cond:\n((c_elaout_5.node)::text = 'contributo'::text)\"\n\" -> Materialize\n(cost=406928364.74..406928364.75 rows=1 width=199)\"\n\" -> Nested Loop\n(cost=402583154.89..406928364.74 rows=1 width=199)\"\n\" Join Filter:\n((r_enti.codice_ente)::text = (r_luoghi.cod_catastale)::text)\"\n\" -> Merge Join\n(cost=202583154.89..206928031.60 rows=1 width=198)\"\n\" Merge Cond:\n(domande.id_domanda = c_elaout_4.id_domanda)\"\n\" -> Merge Join\n(cost=202583154.89..206425374.54 rows=1 width=186)\"\n\" Merge Cond:\n(domande.id_domanda = c_elain_3.id_domanda)\"\n\" -> Merge Join\n(cost=201328203.80..205170407.27 rows=41 width=138)\"\n\" Merge Cond:\n(domande.id_domanda = c_elain_7.id_domanda)\"\n\" -> Merge Join\n(cost=201328203.80..204498966.35 rows=93 width=126)\"\n\" Merge\nCond: (domande.id_domanda = c_elain_9.id_domanda)\"\n\" -> Merge\nJoin (cost=201322293.83..203828121.81 rows=424 width=114)\"\n\"\nMerge Cond: (domande.id_domanda = c_elain_8.id_domanda)\"\n\" ->\nNested Loop (cost=201318498.02..203164011.74 rows=2431 width=102)\"\n\"\n-> Merge Join (cost=101318498.02..103147289.10 rows=2431 width=79)\"\n\"\nMerge Cond: (domande.id_domanda = doc.id)\"\n\"\n-> Merge Join (cost=101318487.80..103060677.64 rows=2493 width=75)\"\n\"\nMerge Cond: (domande.id_domanda = c_elain_1.id_domanda)\"\n\"\n-> Merge Join (cost=101316002.90..102447327.03 rows=15480 width=63)\"\n\"\nMerge Cond: (domande.id_domanda = c_elain.id_domanda)\"\n\"\n-> Merge Join (cost=101314975.72..101780946.74 rows=88502 width=51)\"\n\"\nMerge Cond: (c_elain_2.id_domanda = domande.id_domanda)\"\n\"\n-> Index Scan using 
\"IDX_1_c_elain\" on c_elain c_elain_2\n(cost=0.00..461104.96 rows=129806 width=12)\"\n\"\nIndex Cond: ((node)::text = 'N_componenti'::text)\"\n\"\n-> Sort (cost=101314967.66..101316800.15 rows=732995 width=39)\"\n\"\nSort Key: domande.id_domanda\"\n\"\n-> Merge Join (cost=119414.31..1243561.32 rows=732995 width=39)\"\n\"\nMerge Cond: (domande.id_dichiarazione =\ngeneriche_data_nascita_piu_anziano.id_dichiarazione)\"\n\"\n-> Merge Join (cost=18770.82..1126115.64 rows=123933 width=39)\"\n\"\nMerge Cond: (domande.id_dichiarazione = c_elaout.id_domanda)\"\n\"\n-> Index Scan using \"IDX_5_domande\" on domande (cost=0.00..91684.40\nrows=31967 width=27)\"\n\"\nIndex Cond: (id_servizio = 11002)\"\n\"\nFilter: (id_ente > 0)\"\n\"\n-> Index Scan using \"IDX_2_c_elaout\" on c_elaout\n(cost=0.00..1031179.16 rows=805279 width=12)\"\n\"\nFilter: ((c_elaout.node)::text = 'ISEE'::text)\"\n\"\n-> Materialize (cost=100643.49..106653.58 rows=601009 width=12)\"\n\"\n-> Subquery Scan generiche_data_nascita_piu_anziano\n(cost=0.00..100042.48 rows=601009 width=12)\"\n\"\n-> GroupAggregate (cost=0.00..94032.39 rows=601009 width=12)\"\n\"\n-> Index Scan using \"IDX_1_componenti\" on componenti\n(cost=0.00..76403.45 rows=2023265 width=12)\"\n\"\n-> Index Scan using \"IDX_1_c_elain\" on c_elain (cost=0.00..665581.51\nrows=188052 width=12)\"\n\"\nIndex Cond: ((c_elain.node)::text = 'VSE'::text)\"\n\"\n-> Index Scan using \"IDX_1_c_elain\" on c_elain c_elain_1\n(cost=0.00..613000.48 rows=173074 width=12)\"\n\"\nIndex Cond: ((c_elain_1.node)::text = 'AffittoISEE'::text)\"\n\"\n-> Index Scan using pk_doc on doc (cost=0.00..81963.12 rows=1847118\nwidth=4)\"\n\"\nFilter: (doc.id_tp_stato_doc = 1)\"\n\"\n-> Index Scan using \"IDX_pk_R_Enti\" on r_enti (cost=0.00..6.87 rows=1\nwidth=31)\"\n\"\nIndex Cond: (r_enti.id_ente = domande.id_ente)\"\n\" ->\nIndex Scan using \"IDX_1_c_elain\" on c_elain c_elain_8\n(cost=0.00..663631.02 rows=187497 width=12)\"\n\"\nIndex Cond: ((c_elain_8.node)::text = 'Spese'::text)\"\n\" -> Index\nScan using \"IDX_2_c_elain\" on c_elain c_elain_9 (cost=0.00..670253.16\nrows=235758 width=12)\"\n\"\nFilter: ((c_elain_9.node)::text = 'Mesi'::text)\"\n\" -> Index Scan\nusing \"IDX_2_c_elain\" on c_elain c_elain_7 (cost=0.00..670253.16\nrows=474845 width=12)\"\n\" Filter:\n((c_elain_7.node)::text = 'Affitto'::text)\"\n\" -> Materialize\n(cost=1254951.09..1254963.95 rows=1286 width=48)\"\n\" -> Merge Join\n(cost=2423.84..1254949.80 rows=1286 width=48)\"\n\" Merge\nCond: (c_elain_3.id_domanda = c_elaout_1.id_domanda)\"\n\" -> Merge\nJoin (cost=1094.64..606811.53 rows=1492 width=36)\"\n\"\nMerge Cond: (c_elain_3.id_domanda = c_elaout_3.id_domanda)\"\n\" ->\nMerge Join (cost=224.20..182997.39 rows=2667 width=24)\"\n\"\nMerge Cond: (c_elain_3.id_domanda = c_elaout_2.id_domanda)\"\n\"\n-> Index Scan using \"IDX_1_c_elain\" on c_elain c_elain_3\n(cost=0.00..74101.14 rows=19621 width=12)\"\n\"\nIndex Cond: ((node)::text = 'Solo_anziani'::text)\"\n\"\n-> Index Scan using \"IDX_3_c_elaout\" on c_elaout c_elaout_2\n(cost=0.00..108761.74 rows=28155 width=12)\"\n\"\nIndex Cond: ((c_elaout_2.node)::text = 'ise_fsa'::text)\"\n\" ->\nIndex Scan using \"IDX_3_c_elaout\" on c_elaout c_elaout_3\n(cost=0.00..423543.07 rows=115886 width=12)\"\n\"\nIndex Cond: ((c_elaout_3.node)::text = 'incidenza'::text)\"\n\" -> Index\nScan using \"IDX_3_c_elaout\" on c_elaout c_elaout_1\n(cost=0.00..647740.85 rows=178481 width=12)\"\n\"\nIndex Cond: ((c_elaout_1.node)::text = 'isee_fsa'::text)\"\n\" -> Index Scan 
using\n\"IDX_3_c_elaout\" on c_elaout c_elaout_4 (cost=0.00..502312.35\nrows=137879 width=12)\"\n\" Index Cond:\n((c_elaout_4.node)::text = 'esito'::text)\"\n\" -> Seq Scan on r_luoghi\n(cost=100000000.00..100000200.84 rows=10584 width=11)\"\n\" -> Index Scan using \"IDX_3_c_elaout\" on c_elaout\nc_elaout_7 (cost=0.00..414451.53 rows=113348 width=12)\"\n\" Index Cond: ((c_elaout_7.node)::text =\n'contributo_regolare'::text)\"\n\" -> Index Scan using \"IDX_pk_VE_EDIL_tp_superfici\" on\nve_edil_tp_superfici (cost=0.00..2.27 rows=1 width=11)\"\n\" Index Cond: (ve_edil_tp_superfici.id_tp_superficie\n= edil_veneto.id_tp_superficie)\"\n\n\n\nand that is the explain of the too slow simple query\n\n\"Merge Join (cost=0.00..1032305.52 rows=4 width=12)\"\n\" Merge Cond: (domande.id_dichiarazione = c_elaout.id_domanda)\"\n\" -> Index Scan using \"IDX_8_domande\" on domande (cost=0.00..8.39\nrows=1 width=4)\"\n\" Index Cond: (id_domanda = 4165757)\"\n\" -> Index Scan using \"IDX_2_c_elaout\" on c_elaout\n(cost=0.00..1030283.89 rows=805279 width=12)\"\n\" Filter: ((c_elaout.node)::text = 'Invalido'::text)\"\n\nthis cost 15 sec\n\n\nwith mergejoin to off:\n\n\"Nested Loop (cost=100000000.00..100000022.97 rows=4 width=12)\"\n\" -> Index Scan using \"IDX_8_domande\" on domande (cost=0.00..8.39\nrows=1 width=4)\"\n\" Index Cond: (id_domanda = 4165757)\"\n\" -> Index Scan using \"IDX_2_c_elaout\" on c_elaout (cost=0.00..14.54\nrows=4 width=12)\"\n\" Index Cond: (c_elaout.id_domanda = domande.id_dichiarazione)\"\n\" Filter: ((c_elaout.node)::text = 'Invalido'::text)\"\n\nthis cost 15 msec!!!\n\nThis query work fine even with\nset enable_mergejoin='on';\nset enable_nestloop='on';\n\n\"Nested Loop (cost=0.00..22.97 rows=4 width=12) (actual\ntime=10.110..10.122 rows=1 loops=1)\"\n\" -> Index Scan using \"IDX_8_domande\" on domande (cost=0.00..8.39\nrows=1 width=4) (actual time=0.071..0.075 rows=1 loops=1)\"\n\" Index Cond: (id_domanda = 4165757)\"\n\" -> Index Scan using \"IDX_2_c_elaout\" on c_elaout (cost=0.00..14.54\nrows=4 width=12) (actual time=10.029..10.031 rows=1 loops=1)\"\n\" Index Cond: (c_elaout.id_domanda = domande.id_dichiarazione)\"\n\" Filter: ((c_elaout.node)::text = 'Invalido'::text)\"\n\"Total runtime: 10.211 ms\"\n\n\nbut in this situation the previous kind of query doesn't arrive at the\nend and the plan becomes:\n\"Merge Right Join (cost=100707011.72..100707130.15 rows=1 width=227)\"\n\" Merge Cond: (ve_edil_rendite.id_domanda = domande.id_domanda)\"\n\" -> GroupAggregate (cost=0.00..105.51 rows=1031 width=11)\"\n\" -> Index Scan using pk_ve_edil_rendite on ve_edil_rendite\n(cost=0.00..86.84 rows=1157 width=11)\"\n\" -> Materialize (cost=100707011.72..100707011.73 rows=1 width=195)\"\n\" -> Nested Loop (cost=100689558.36..100707011.72 rows=1\nwidth=195)\"\n\" -> Nested Loop (cost=100689558.36..100706997.17 rows=1\nwidth=247)\"\n\" -> Nested Loop (cost=100689558.36..100706982.62\nrows=1 width=235)\"\n\" Join Filter: ((r_enti.codice_ente)::text =\n(r_luoghi.cod_catastale)::text)\"\n\" -> Nested Loop (cost=689558.36..706649.48\nrows=1 width=234)\"\n\" -> Nested Loop\n(cost=689558.36..706647.20 rows=1 width=227)\"\n\" -> Nested Loop\n(cost=689558.36..706632.65 rows=1 width=215)\"\n\" -> Nested Loop\n(cost=689558.36..706618.10 rows=1 width=203)\"\n\" Join Filter:\n(domande.id_domanda = edil_veneto.id_domanda)\"\n\" -> Index Scan using\n\"IDX_pk_Edil_Veneto\" on edil_veneto (cost=0.00..11825.14 rows=232649\nwidth=17)\"\n\" -> Materialize\n(cost=689558.36..689558.37 rows=1 width=186)\"\n\" -> 
Nested Loop\n(cost=100643.49..689558.36 rows=1 width=186)\"\n\" ->\nNested Loop (cost=100643.49..689543.81 rows=1 width=174)\"\n\" ->\nNested Loop (cost=100643.49..689530.86 rows=1 width=162)\"\n\"\n-> Nested Loop (cost=100643.49..689517.93 rows=1 width=150)\"\n\"\n-> Nested Loop (cost=100643.49..689505.01 rows=1 width=138)\"\n\"\n-> Nested Loop (cost=100643.49..689490.46 rows=1 width=126)\"\n\"\n-> Nested Loop (cost=100643.49..688816.73 rows=44 width=114)\"\n\"\n-> Merge Join (cost=100643.49..657277.54 rows=2431 width=102)\"\n\"\nMerge Cond: (domande.id_dichiarazione =\ngeneriche_data_nascita_piu_anziano.id_dichiarazione)\"\n\"\n-> Nested Loop (cost=0.00..549096.04 rows=412 width=102)\"\n\"\n-> Nested Loop (cost=0.00..547345.02 rows=106 width=90)\"\n\"\n-> Nested Loop (cost=0.00..546615.85 rows=106 width=67)\"\n\"\n-> Nested Loop (cost=0.00..545694.51 rows=109 width=63)\"\n\"\n-> Nested Loop (cost=0.00..537605.96 rows=621 width=51)\"\n\"\n-> Nested Loop (cost=0.00..487675.59 rows=3860 width=39)\"\n\"\n-> Index Scan using \"IDX_5_domande\" on domande (cost=0.00..91684.40\nrows=31967 width=27)\"\n\"\nIndex Cond: (id_servizio = 11002)\"\n\"\nFilter: (id_ente > 0)\"\n\"\n-> Index Scan using \"IDX_1_c_elain\" on c_elain c_elain_2\n(cost=0.00..12.37 rows=1 width=12)\"\n\"\nIndex Cond: (((c_elain_2.node)::text = 'N_componenti'::text) AND\n(c_elain_2.id_domanda = domande.id_domanda))\"\n\"\n-> Index Scan using \"IDX_1_c_elain\" on c_elain c_elain_1\n(cost=0.00..12.92 rows=1 width=12)\"\n\"\nIndex Cond: (((c_elain_1.node)::text = 'AffittoISEE'::text) AND\n(c_elain_1.id_domanda = domande.id_domanda))\"\n\"\n-> Index Scan using \"IDX_1_c_elain\" on c_elain (cost=0.00..13.01\nrows=1 width=12)\"\n\"\nIndex Cond: (((c_elain.node)::text = 'VSE'::text) AND\n(c_elain.id_domanda = domande.id_domanda))\"\n\"\n-> Index Scan using pk_doc on doc (cost=0.00..8.44 rows=1 width=4)\"\n\"\nIndex Cond: (doc.id = domande.id_domanda)\"\n\"\nFilter: (doc.id_tp_stato_doc = 1)\"\n\"\n-> Index Scan using \"IDX_pk_R_Enti\" on r_enti (cost=0.00..6.87 rows=1\nwidth=31)\"\n\"\nIndex Cond: (r_enti.id_ente = domande.id_ente)\"\n\"\n-> Index Scan using \"IDX_2_c_elaout\" on c_elaout (cost=0.00..16.47\nrows=4 width=12)\"\n\"\nIndex Cond: (c_elaout.id_domanda = domande.id_dichiarazione)\"\n\"\nFilter: ((c_elaout.node)::text = 'ISEE'::text)\"\n\"\n-> Materialize (cost=100643.49..106653.58 rows=601009 width=12)\"\n\"\n-> Subquery Scan generiche_data_nascita_piu_anziano\n(cost=0.00..100042.48 rows=601009 width=12)\"\n\"\n-> GroupAggregate (cost=0.00..94032.39 rows=601009 width=12)\"\n\"\n-> Index Scan using \"IDX_1_componenti\" on componenti\n(cost=0.00..76403.45 rows=2023265 width=12)\"\n\"\n-> Index Scan using \"IDX_1_c_elain\" on c_elain c_elain_3\n(cost=0.00..12.96 rows=1 width=12)\"\n\"\nIndex Cond: (((c_elain_3.node)::text = 'Solo_anziani'::text) AND\n(c_elain_3.id_domanda = domande.id_domanda))\"\n\"\n-> Index Scan using \"IDX_3_c_elaout\" on c_elaout c_elaout_2\n(cost=0.00..15.30 rows=1 width=12)\"\n\"\nIndex Cond: (((c_elaout_2.node)::text = 'ise_fsa'::text) AND\n(c_elaout_2.id_domanda = domande.id_domanda))\"\n\"\n-> Index Scan using \"IDX_2_c_elaout\" on c_elaout c_elaout_3\n(cost=0.00..14.54 rows=1 width=12)\"\n\"\nIndex Cond: (c_elaout_3.id_domanda = domande.id_domanda)\"\n\"\nFilter: ((c_elaout_3.node)::text = 'incidenza'::text)\"\n\"\n-> Index Scan using \"IDX_2_c_elain\" on c_elain c_elain_9\n(cost=0.00..12.91 rows=1 width=12)\"\n\"\nIndex Cond: (c_elain_9.id_domanda = domande.id_domanda)\"\n\"\nFilter: 
((c_elain_9.node)::text = 'Mesi'::text)\"\n\"\n-> Index Scan using \"IDX_2_c_elain\" on c_elain c_elain_8\n(cost=0.00..12.91 rows=1 width=12)\"\n\"\nIndex Cond: (c_elain_8.id_domanda = domande.id_domanda)\"\n\"\nFilter: ((c_elain_8.node)::text = 'Spese'::text)\"\n\" ->\nIndex Scan using \"IDX_2_c_elain\" on c_elain c_elain_7 (cost=0.00..12.91\nrows=3 width=12)\"\n\"\nIndex Cond: (c_elain_7.id_domanda = domande.id_domanda)\"\n\"\nFilter: ((c_elain_7.node)::text = 'Affitto'::text)\"\n\" -> Index\nScan using \"IDX_2_c_elaout\" on c_elaout c_elaout_1 (cost=0.00..14.54\nrows=1 width=12)\"\n\"\nIndex Cond: (c_elaout_1.id_domanda = domande.id_domanda)\"\n\"\nFilter: ((c_elaout_1.node)::text = 'isee_fsa'::text)\"\n\" -> Index Scan using\n\"IDX_2_c_elaout\" on c_elaout c_elaout_7 (cost=0.00..14.54 rows=1\nwidth=12)\"\n\" Index Cond:\n(c_elaout_7.id_domanda = domande.id_domanda)\"\n\" Filter:\n((c_elaout_7.node)::text = 'contributo_regolare'::text)\"\n\" -> Index Scan using\n\"IDX_2_c_elaout\" on c_elaout c_elaout_6 (cost=0.00..14.54 rows=1\nwidth=12)\"\n\" Index Cond:\n(c_elaout_6.id_domanda = domande.id_domanda)\"\n\" Filter:\n((c_elaout_6.node)::text = 'contributo_sociale'::text)\"\n\" -> Index Scan using\n\"IDX_pk_VE_EDIL_tp_superfici\" on ve_edil_tp_superfici (cost=0.00..2.27\nrows=1 width=11)\"\n\" Index Cond:\n(ve_edil_tp_superfici.id_tp_superficie = edil_veneto.id_tp_superficie)\"\n\" -> Seq Scan on r_luoghi\n(cost=100000000.00..100000200.84 rows=10584 width=11)\"\n\" -> Index Scan using \"IDX_2_c_elaout\" on c_elaout\nc_elaout_5 (cost=0.00..14.54 rows=1 width=12)\"\n\" Index Cond: (c_elaout_5.id_domanda =\ndomande.id_domanda)\"\n\" Filter: ((c_elaout_5.node)::text =\n'contributo'::text)\"\n\" -> Index Scan using \"IDX_2_c_elaout\" on c_elaout\nc_elaout_4 (cost=0.00..14.54 rows=1 width=12)\"\n\" Index Cond: (c_elaout_4.id_domanda =\ndomande.id_domanda)\"\n\" Filter: ((c_elaout_4.node)::text = 'esito'::text)\"\n\n\n\nReally thanks for your interest and your help!\n\n\n", "msg_date": "Tue, 16 Jun 2009 16:45:53 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance with query" }, { "msg_contents": "On Tue, 16 Jun 2009, Alberto Dalmaso wrote:\n>> What does your postgresql.conf file look like?\n\n> enable_hashjoin = off\n> enable_nestloop = off\n> enable_seqscan = off\n> enable_sort = off\n\nWhy are these switched off?\n\n> and that is the explain of the too slow simple query\n>\n> \"Merge Join (cost=0.00..1032305.52 rows=4 width=12)\"\n> \" Merge Cond: (domande.id_dichiarazione = c_elaout.id_domanda)\"\n> \" -> Index Scan using \"IDX_8_domande\" on domande (cost=0.00..8.39\n> rows=1 width=4)\"\n> \" Index Cond: (id_domanda = 4165757)\"\n> \" -> Index Scan using \"IDX_2_c_elaout\" on c_elaout\n> (cost=0.00..1030283.89 rows=805279 width=12)\"\n> \" Filter: ((c_elaout.node)::text = 'Invalido'::text)\"\n>\n> this cost 15 sec\n>\n>\n> with mergejoin to off:\n>\n> \"Nested Loop (cost=100000000.00..100000022.97 rows=4 width=12)\"\n> \" -> Index Scan using \"IDX_8_domande\" on domande (cost=0.00..8.39\n> rows=1 width=4)\"\n> \" Index Cond: (id_domanda = 4165757)\"\n> \" -> Index Scan using \"IDX_2_c_elaout\" on c_elaout (cost=0.00..14.54\n> rows=4 width=12)\"\n> \" Index Cond: (c_elaout.id_domanda = domande.id_dichiarazione)\"\n> \" Filter: ((c_elaout.node)::text = 'Invalido'::text)\"\n>\n> this cost 15 msec!!!\n\nWell duh. What you're effectively doing is telling Postgres to NEVER use a \nnested loop. 
Then you're getting upset because it isn't using a nested \nloop. When you tell it to NEVER use anything (switching all join \nalgorithms off), it ignores you and chooses the right plan anyway.\n\nMatthew\n\n-- \n You can configure Windows, but don't ask me how. -- Bill Gates\n", "msg_date": "Tue, 16 Jun 2009 15:58:56 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> wrote: \n \n>> What version of PostgreSQL?\n> 8.3 that comes with opensuse 11.1\n \nCould you show us the result of SELECT version(); ?\n \n> max_prepared_transactions = 30\n \nUnless you're using distributed transactions or need a lot of locks,\nthat's just going to waste some RAM. Zero is fine for most people.\n \n> maintenance_work_mem =50MB\n \nThat's a little small -- this only comes into play for maintenance\ntasks like index builds. Not directly part of your reported problem,\nbut maybe something to bump to the 1GB range.\n \n> max_fsm_pages = 160000\n> max_fsm_relations = 5000\n \nHave you done any VACUUM VERBOSE lately and captured the output? If\nso, what do the last few lines say? (That's a lot of relations for\nthe number of pages; just curious how it maps to actual.)\n \n> enable_hashjoin = off\n> enable_nestloop = off\n> enable_seqscan = off\n> enable_sort = off\n \nThat's probably a bad idea. If particular queries aren't performing\nwell, you can always set these temporarily on a particular connection.\nEven then, turning these off is rarely a good idea except for\ndiagnostic purposes. I *strongly* recommend you put all of these back\nto the defaults of 'on' and start from there, turning off selected\nitems as needed to get EXPLAIN ANALYZE output to demonstrate the\nbetter plans you've found for particular queries.\n \n> effective_cache_size = 3600MB\n \nThat seems a little on the low side for an 8GB machine, unless you\nhave other things on there using a lot of RAM. Do you?\n \nIf you could set the optimizer options back on and get new plans where\nyou show specifically which options (if any) where turned off for the\nrun, that would be good. Also, please attach the plans to the email\ninstead of pasting -- the word wrap makes them hard to read. Finally,\nif you could do \\d on the tables involved in the query, it would help.\nI'll hold off looking at these in hopes that you can do the above.\n \n-Kevin\n", "msg_date": "Tue, 16 Jun 2009 10:06:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "Il giorno mar, 16/06/2009 alle 15.58 +0100, Matthew Wakeling ha scritto:\n> On Tue, 16 Jun 2009, Alberto Dalmaso wrote:\n> >> What does your postgresql.conf file look like?\n> \n> > enable_hashjoin = off\n> > enable_nestloop = off\n> > enable_seqscan = off\n> > enable_sort = off\n> \n> Why are these switched off?\n> \nbecause of the need to pump up the performance of the complex query. If\nI set then to on then it try to use nasted loop even in the complex\nquery and that query does never arrive to a response.... 
and, of course,\nI need a response from it!!!\nSo my problem is to find a configuration taht save performance for all\nthe two kind of query, but I'm not abble to find it.\nMove to parameters of the RAM can save a 10% of the time in the complex\nquery, wile I have no changes on the simple one...\n\n", "msg_date": "Tue, 16 Jun 2009 17:12:54 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> writes:\n> Il giorno mar, 16/06/2009 alle 15.58 +0100, Matthew Wakeling ha scritto:\n>>> enable_hashjoin = off\n>>> enable_nestloop = off\n>>> enable_seqscan = off\n>>> enable_sort = off\n>> \n>> Why are these switched off?\n>> \n> because of the need to pump up the performance of the complex query.\n\nThat is *not* the way to improve performance of a query. Turning off\nspecific enable_ parameters can be helpful while investigating planner\nbehavior, but it is never recommended as a production solution. You\nhave already found out why.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jun 2009 11:31:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query " }, { "msg_contents": "> Could you show us the result of SELECT version(); ?\nof course I can \nPostgreSQL 8.3.7 on x86_64-unknown-linux-gnu, compiled by GCC gcc (SUSE\nLinux) 4.3.2 [gcc-4_3-branch revision 141291]\n> \n> Have you done any VACUUM VERBOSE lately and captured the output? If\n> so, what do the last few lines say? (That's a lot of relations for\n> the number of pages; just curious how it maps to actual.)\nIt need a lot of time (20 GB database), when I will have the answare\nI'll post it\n> \n> > enable_hashjoin = off\n> > enable_nestloop = off\n> > enable_seqscan = off\n> > enable_sort = off\n> \n> That's probably a bad idea. If particular queries aren't performing\n> well, you can always set these temporarily on a particular connection.\n> Even then, turning these off is rarely a good idea except for\n> diagnostic purposes. I *strongly* recommend you put all of these back\n> to the defaults of 'on' and start from there, turning off selected\n> items as needed to get EXPLAIN ANALYZE output to demonstrate the\n> better plans you've found for particular queries.\n\nOK, it will became the viceversa of what I'm doing now (set them to on\nand set them to off only on the appropriate connection instead of set\nthem to off and set them to on only on some appropriate connection).\nBut the question is: do you thing it is impossible to find a\nconfiguration that works fine for both the kind of query? The\napplication have to run even versus oracle db... i wont have to write a\ndifferent source for the two database...\n\n> \n> > effective_cache_size = 3600MB\n> \n> That seems a little on the low side for an 8GB machine, unless you\n> have other things on there using a lot of RAM. Do you?\nyes there are two instances of postgress running on the same server (the\ndatabase have to stay complitely separated).\n> \n> If you could set the optimizer options back on and get new plans where\n> you show specifically which options (if any) where turned off for the\n> run, that would be good. Also, please attach the plans to the email\n> instead of pasting -- the word wrap makes them hard to read. 
Finally,\n> if you could do \\d on the tables involved in the query, it would help.\n> I'll hold off looking at these in hopes that you can do the above.\n> \n> -Kevin\nI attach the explanation of the log query after setting all the enable\nto on. In this condition the query will never finish...", "msg_date": "Tue, 16 Jun 2009 17:31:56 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> wrote: \n> Il giorno mar, 16/06/2009 alle 15.58 +0100, Matthew Wakeling ha\n> scritto:\n>> On Tue, 16 Jun 2009, Alberto Dalmaso wrote:\n \n>> > enable_hashjoin = off\n>> > enable_nestloop = off\n>> > enable_seqscan = off\n>> > enable_sort = off\n>> \n>> Why are these switched off?\n>> \n> because of the need to pump up the performance of the complex query.\n \nThese really are meant primarily for diagnostic purposes. As a last\nresort, you could set them off right before running a problem query,\nand set them back on again afterward; but you will be much better off\nif you can cure the underlying problem. The best chance of that is to\nshow us the plan you get with all turned on.\n \n-Kevin\n", "msg_date": "Tue, 16 Jun 2009 10:32:36 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> wrote:\n \n> do you thing it is impossible to find a\n> configuration that works fine for both the kind of query?\n \nNo. We probably just need a little more information.\n \n> The application have to run even versus oracle db... i wont have to\n> write a different source for the two database...\n \nI understand completely.\n \n> I attach the explanation of the log query after setting all the\n> enable to on. In this condition the query will never finish...\n \nWe're getting close. Can you share the table structure and the actual\nquery you are running? It's a lot easier (for me, anyway) to put this\npuzzle together with all the pieces in hand.\n \nAlso, if you can set off some of the optimizer options and get a fast\nplan, please show us an EXPLAIN ANALYZE for that, with information on\nwhich settings were turned off. That will help show where bad\nestimates may be causing a problem, or possibly give a hint of table\nor index bloat problems.\n \nI think we're getting close to critical mass for seeing the\nsolution....\n \n-Kevin\n", "msg_date": "Tue, 16 Jun 2009 10:48:30 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "Il giorno mar, 16/06/2009 alle 11.31 -0400, Tom Lane ha scritto:\n> Alberto Dalmaso <[email protected]> writes:\n> > Il giorno mar, 16/06/2009 alle 15.58 +0100, Matthew Wakeling ha scritto:\n> >>> enable_hashjoin = off\n> >>> enable_nestloop = off\n> >>> enable_seqscan = off\n> >>> enable_sort = off\n> >> \n> >> Why are these switched off?\n> >> \n> > because of the need to pump up the performance of the complex query.\n> \n> That is *not* the way to improve performance of a query. Turning off\n> specific enable_ parameters can be helpful while investigating planner\n> behavior, but it is never recommended as a production solution. 
You\n> have already found out why.\n> \n> \t\t\tregards, tom lane\nOk, but the problem is that my very long query performes quite well when\nit works with merge join but it cannot arrive to an end if it use other\nkind of joining.\nIf i put all the parameter to on, as both of you tell me, in the\nexplanation I'll see that the db use nasted loop.\nIf i put to off nasted loop, it will use hash join.\nHow can I write the query so that the analyzer will use mergejoin (that\nis the only option that permit the query to give me the waited answare)\nwithout changing the settings every time on the connection?\n\n", "msg_date": "Tue, 16 Jun 2009 17:54:37 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> wrote: \n \n> I attach the explanation of the log query after setting all the\n> enable to on. In this condition the query will never finish...\n \nI notice that you many joins in there. If the query can't be\nsimplified, you probably need to boost the join_collapse_limit and\nfrom_collapse_limit quite a bit. If planning time goes through the\nroof in that case, you may need to enable geqo -- this is what it's\nintended to help. If you try geqo, you may need to tune it; I'm not\nfamiliar with the knobs for tuning that, so maybe someone else will\njump in if you get to that point.\n \n-Kevin\n", "msg_date": "Tue, 16 Jun 2009 11:00:04 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> writes:\n> Ok, but the problem is that my very long query performes quite well when\n> it works with merge join but it cannot arrive to an end if it use other\n> kind of joining.\n> If i put all the parameter to on, as both of you tell me, in the\n> explanation I'll see that the db use nasted loop.\n> If i put to off nasted loop, it will use hash join.\n> How can I write the query so that the analyzer will use mergejoin (that\n> is the only option that permit the query to give me the waited answare)\n> without changing the settings every time on the connection?\n\nYou have the wrong mindset completely. Instead of thinking \"how can I\nforce the planner to do it my way\", you need to be thinking \"why is the\nplanner guessing wrong about which is the best way to do it? 
And how\ncan I improve its guess?\"\n\nThere's not really enough information in what you've posted so far to\nlet people help you with that question, but one thing that strikes me\nfrom the EXPLAIN is that you have a remarkably large number of joins.\nPerhaps increasing from_collapse_limit and/or join_collapse_limit\n(to more than the number of tables in the query) would help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jun 2009 12:12:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query " }, { "msg_contents": "Unfortunatly the query need that level of complxity as the information I\nhave to show are spread around different table.\nI have tryed the geqo on at the beginning but only with the default\nparameters.\nTomorrow (my working day here in Italy is finished some minutes ago, so\nI will wait for the end of the explain analyze and the go home ;-P )\nI'll try to increase, as you suggest, join_collapse_limit and\nfrom_collapse_limit.\nIf someone can give me some information on how to configure geqo, I'll\ntry it again.\nIn the meantime this night I leave the vacuum verbose to work for me.\n\n", "msg_date": "Tue, 16 Jun 2009 18:12:58 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance with query" }, { "msg_contents": "Good afternoon.\n\nI have developed an application to efficiently schedule chemotherapy\npatients at our hospital. The application takes into account several\nresource constraints (available chairs, available nurses, nurse coverage\nassignment to chairs) as well as the chair time and nursing time\nrequired for a regimen.\n\nThe algorithm for packing appointments in respects each constraint and\ntypically schedules a day of treatments (30-60) within 9-10 seconds on\nmy workstation, down from 27 seconds initially. I would like to get it\nbelow 5 seconds if possible.\n\nI think what's slowing is down is simply the number of rows and joins.\nThe algorithm creates a scheduling matrix with one row per 5 minute\ntimeslot, per unit, per nurse assigned to the unit. That translates to\n3,280 rows for the days I have designed in development (each day can\nchange). \n\nTo determine the available slots, the algorithm finds the earliest slot\nthat has an available chair and a count of the required concurrent\nintervals afterwards. So a 60 minute regimen requires 12 concurrent\nrows. This is accomplished by joining the table on itself. A second\nquery is ran for the same range, but with respect to the nurse time and\nan available nurse. Finally, those two are joined against each other.\nEffectively, it is:\n\nSelect *\n>From (\n\tSelect *\n\tFrom matrix m1, matrix m2\n\tWhere m1.xxxxx = m2.xxxxx\n\t) chair,\n\t(\n\tSelect *\n\tFrom matrix m1, matrix m2\n\tWhere m1.xxxxx = m2.xxxxx\n\t) nurse\nWhere chair.id = nurse.id\n\nWith matrix having 3,280 rows. Ugh.\n\nI have tried various indexes and clustering approachs with little\nsuccess. Any ideas?\n\nThanks,\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n(613) 549-6666 x4294 \n\n", "msg_date": "Tue, 16 Jun 2009 14:35:23 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": false, "msg_subject": "Speeding up a query." }, { "msg_contents": "On the DB side of things, you will want to make sure that your caching\nas much as possible - putting a front-end like memcached could help. I\nassume you have indexes on the appropriate tables? 
What does the\nEXPLAIN ANALYZE on that query look like?\n\nNot necessarily a \"postgres\" solution, but I'd think this type of\nsolution would work really, really well inside of say a a mixed integer\nor integer solver ... something like glpk or cbc. You'd need to\nreformulate the problem, but we've built applications using these tools\nwhich can crunch through multiple billions of combinations in under 1 or\n2 seconds.\n\n(Of course, you still need to store the results, and feed the input,\nusing a database of some kind).\n\n\n--\nAnthony Presley\n\nOn Tue, 2009-06-16 at 14:35 -0400, Hartman, Matthew wrote:\n> Good afternoon.\n> \n> I have developed an application to efficiently schedule chemotherapy\n> patients at our hospital. The application takes into account several\n> resource constraints (available chairs, available nurses, nurse coverage\n> assignment to chairs) as well as the chair time and nursing time\n> required for a regimen.\n> \n> The algorithm for packing appointments in respects each constraint and\n> typically schedules a day of treatments (30-60) within 9-10 seconds on\n> my workstation, down from 27 seconds initially. I would like to get it\n> below 5 seconds if possible.\n> \n> I think what's slowing is down is simply the number of rows and joins.\n> The algorithm creates a scheduling matrix with one row per 5 minute\n> timeslot, per unit, per nurse assigned to the unit. That translates to\n> 3,280 rows for the days I have designed in development (each day can\n> change). \n> \n> To determine the available slots, the algorithm finds the earliest slot\n> that has an available chair and a count of the required concurrent\n> intervals afterwards. So a 60 minute regimen requires 12 concurrent\n> rows. This is accomplished by joining the table on itself. A second\n> query is ran for the same range, but with respect to the nurse time and\n> an available nurse. Finally, those two are joined against each other.\n> Effectively, it is:\n> \n> Select *\n> From (\n> \tSelect *\n> \tFrom matrix m1, matrix m2\n> \tWhere m1.xxxxx = m2.xxxxx\n> \t) chair,\n> \t(\n> \tSelect *\n> \tFrom matrix m1, matrix m2\n> \tWhere m1.xxxxx = m2.xxxxx\n> \t) nurse\n> Where chair.id = nurse.id\n> \n> With matrix having 3,280 rows. Ugh.\n> \n> I have tried various indexes and clustering approachs with little\n> success. Any ideas?\n> \n> Thanks,\n> \n> Matthew Hartman\n> Programmer/Analyst\n> Information Management, ICP\n> Kingston General Hospital\n> (613) 549-6666 x4294 \n> \n> \n\n", "msg_date": "Tue, 16 Jun 2009 14:36:45 -0500", "msg_from": "Anthony Presley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query." }, { "msg_contents": "Alberto Dalmaso wrote:\n[...]\n> in the explanation I'll see that the db use nasted loop.\n[...]\n\nSorry for the remark off topic, but I *love* the term\n\"nasted loop\". It should not go to oblivion unnoticed.\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 17 Jun 2009 09:16:59 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query (OT)" }, { "msg_contents": "Matthew Hartman wrote:\n> To determine the available slots, the algorithm finds the earliest slot\n> that has an available chair and a count of the required concurrent\n> intervals afterwards. So a 60 minute regimen requires 12 concurrent\n> rows. This is accomplished by joining the table on itself. A second\n> query is ran for the same range, but with respect to the nurse time and\n> an available nurse. 
Finally, those two are joined against each other.\n> Effectively, it is:\n> \n> Select *\n> From (\n> \tSelect *\n> \tFrom matrix m1, matrix m2\n> \tWhere m1.xxxxx = m2.xxxxx\n> \t) chair,\n> \t(\n> \tSelect *\n> \tFrom matrix m1, matrix m2\n> \tWhere m1.xxxxx = m2.xxxxx\n> \t) nurse\n> Where chair.id = nurse.id\n> \n> With matrix having 3,280 rows. Ugh.\n> \n> I have tried various indexes and clustering approachs with little\n> success. Any ideas?\n\nI don't understand your data model well enough to understand\nthe query, so I can only give you general hints (which you probably\nalready know):\n\n- Frequently the biggest performance gains can be reached by\n a (painful) redesign. Can ou change the table structure in a way\n that makes this query less expensive?\n\n- You have an index on matrix.xxxxx, right?\n\n- Can you reduce the row count of the two subqueries by adding\n additional conditions that weed out rows that can be excluded\n right away?\n\n- Maybe you can gain a little by changing the \"select *\" to\n \"select id\" in both subqueries and adding an additional join\n with matrix that adds the relevant columns in the end.\n I don't know the executor, so I don't know if that will help,\n but it would be a simple thing to test in an experiment.\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 17 Jun 2009 09:33:35 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query." }, { "msg_contents": "On Wed, Jun 17, 2009 at 8:33 AM, Albe Laurenz<[email protected]> wrote:\n\n>\n> I don't understand your data model well enough to understand\n> the query, so I can only give you general hints (which you probably\n> already know):\n\nHe is effectively joining same table 4 times in a for loop, to get\nresult, this is veeery ineffective.\nimagine:\nfor(x)\n for(x)\n for(x)\n for(x)\n..\n\nwhere X is number of rows in table matrix. not scarred yet ?\n\n-- \nGJ\n", "msg_date": "Wed, 17 Jun 2009 09:40:30 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query." }, { "msg_contents": "On Tue, Jun 16, 2009 at 2:35 PM, Hartman,\nMatthew<[email protected]> wrote:\n> Good afternoon.\n>\n> I have developed an application to efficiently schedule chemotherapy\n> patients at our hospital. The application takes into account several\n> resource constraints (available chairs, available nurses, nurse coverage\n> assignment to chairs) as well as the chair time and nursing time\n> required for a regimen.\n>\n> The algorithm for packing appointments in respects each constraint and\n> typically schedules a day of treatments (30-60) within 9-10 seconds on\n> my workstation, down from 27 seconds initially. I would like to get it\n> below 5 seconds if possible.\n>\n> I think what's slowing is down is simply the number of rows and joins.\n> The algorithm creates a scheduling matrix with one row per 5 minute\n> timeslot, per unit, per nurse assigned to the unit. That translates to\n> 3,280 rows for the days I have designed in development (each day can\n> change).\n>\n> To determine the available slots, the algorithm finds the earliest slot\n> that has an available chair and a count of the required concurrent\n> intervals afterwards. So a 60 minute regimen requires 12 concurrent\n> rows. This is accomplished by joining the table on itself. A second\n> query is ran for the same range, but with respect to the nurse time and\n> an available nurse. 
Finally, those two are joined against each other.\n> Effectively, it is:\n>\n> Select *\n> From   (\n>        Select *\n>        From matrix m1, matrix m2\n>        Where m1.xxxxx = m2.xxxxx\n>        ) chair,\n>        (\n>        Select *\n>        From matrix m1, matrix m2\n>        Where m1.xxxxx = m2.xxxxx\n>        ) nurse\n> Where chair.id = nurse.id\n>\n> With matrix having 3,280 rows. Ugh.\n>\n> I have tried various indexes and clustering approachs with little\n> success. Any ideas?\n\nhow far in advance do you schedule? As far as necessary?\n\nHow many chairs are there? How many nurses are there? This is a\ntricky (read: interesting) problem.\n\nmerlin\n", "msg_date": "Wed, 17 Jun 2009 09:08:33 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query." }, { "msg_contents": "Thanks for the replies everyone. I'll try to answer them all in this one email. I will send another email immediately after this with additional details about the query.\n\n> - Frequently the biggest performance gains can be reached by\n> a (painful) redesign. Can ou change the table structure in a way\n> that makes this query less expensive?\n\nI have considered redesigning the algorithm to accommodate this. As I've said, there's one row per five minute time slot. Instead, I could represent an interval of time with a row. For example, \"start_time\" of \"08:00\" with an \"end_time\" of \"12:00\" or perhaps an interval \"duration\" of \"4 hours\". The difficulty becomes in managing separate time requirements (nurse vs unit) for each time slot, and in inserting/updating new rows as pieces of those time slots or intervals are used up. Having a row per five minute interval avoids those complications so far. Still, I'd start with 32 rows and increase the number, never reaching 3,280.. :)\n\n> - You have an index on matrix.xxxxx, right?\n\nI have tried indexes on each common join criteria. Usually it's \"time,unit\", \"time,nurse\", or \"time,unit_scheduled\", \"time,nurse_scheduled\" (the later two being Booleans). In the first two cases it's made a difference of less than a second. In the last two, the time actually increases if I add \"analyze\" statements in after updates are made.\n\n> - Can you reduce the row count of the two subqueries by adding\n> additional conditions that weed out rows that can be excluded\n> right away?\n\nI use some additional conditions. I'll paste the meat of the query below.\n\n> - Maybe you can gain a little by changing the \"select *\" to\n> \"select id\" in both subqueries and adding an additional join\n> with matrix that adds the relevant columns in the end.\n> I don't know the executor, so I don't know if that will help,\n> but it would be a simple thing to test in an experiment.\n\nI wrote the \"select *\" as simplified, but really, it returns the primary key for that row.\n\n> how far in advance do you schedule? As far as necessary?\n\nIt's done on a per day basis, each day taking 8-12 seconds or so on my workstation. We typically schedule patients as much as three to six months in advance. The query already pulls data to a temporary table to avoid having to manage a massive number of rows.\n\n> How many chairs are there? How many nurses are there? This is a\n> tricky (read: interesting) problem.\n\nIn my current template there are 17 chairs and 7 nurses. Chairs are grouped into pods of 2-4 chairs. 
Nurses cover one to many pods, allowing for a primary nurse per pod as well as floater nurses that cover multiple pods.\n\n\n\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n(613) 549-6666 x4294 \n \n\n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]] \nSent: Wednesday, June 17, 2009 9:09 AM\nTo: Hartman, Matthew\nCc: [email protected]\nSubject: Re: [PERFORM] Speeding up a query.\n\nOn Tue, Jun 16, 2009 at 2:35 PM, Hartman,\nMatthew<[email protected]> wrote:\n> Good afternoon.\n>\n> I have developed an application to efficiently schedule chemotherapy\n> patients at our hospital. The application takes into account several\n> resource constraints (available chairs, available nurses, nurse coverage\n> assignment to chairs) as well as the chair time and nursing time\n> required for a regimen.\n>\n> The algorithm for packing appointments in respects each constraint and\n> typically schedules a day of treatments (30-60) within 9-10 seconds on\n> my workstation, down from 27 seconds initially. I would like to get it\n> below 5 seconds if possible.\n>\n> I think what's slowing is down is simply the number of rows and joins.\n> The algorithm creates a scheduling matrix with one row per 5 minute\n> timeslot, per unit, per nurse assigned to the unit. That translates to\n> 3,280 rows for the days I have designed in development (each day can\n> change).\n>\n> To determine the available slots, the algorithm finds the earliest slot\n> that has an available chair and a count of the required concurrent\n> intervals afterwards. So a 60 minute regimen requires 12 concurrent\n> rows. This is accomplished by joining the table on itself. A second\n> query is ran for the same range, but with respect to the nurse time and\n> an available nurse. Finally, those two are joined against each other.\n> Effectively, it is:\n>\n> Select *\n> From   (\n>        Select *\n>        From matrix m1, matrix m2\n>        Where m1.xxxxx = m2.xxxxx\n>        ) chair,\n>        (\n>        Select *\n>        From matrix m1, matrix m2\n>        Where m1.xxxxx = m2.xxxxx\n>        ) nurse\n> Where chair.id = nurse.id\n>\n> With matrix having 3,280 rows. Ugh.\n>\n> I have tried various indexes and clustering approachs with little\n> success. Any ideas?\n\nhow far in advance do you schedule? As far as necessary?\n\nHow many chairs are there? How many nurses are there? This is a\ntricky (read: interesting) problem.\n\nmerlin\n\n", "msg_date": "Wed, 17 Jun 2009 10:58:50 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query." }, { "msg_contents": "Sorry, I missed this reponse.\n\nI'm entirely new to PostgreSQL and have yet to figure out how to use\nEXPLAIN ANALYZE on a function. I think I realize where the problem is\nthough (the loop), I simply do not know how to fix it ;).\n\nGlpk and cbc, thanks, I'll look into those. 
You're right, the very\nnature of using a loop suggests that another tool might be more\nappropriate.\n\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n(613) 549-6666 x4294 \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Anthony\nPresley\nSent: Tuesday, June 16, 2009 3:37 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Speeding up a query.\n\nOn the DB side of things, you will want to make sure that your caching\nas much as possible - putting a front-end like memcached could help. I\nassume you have indexes on the appropriate tables? What does the\nEXPLAIN ANALYZE on that query look like?\n\nNot necessarily a \"postgres\" solution, but I'd think this type of\nsolution would work really, really well inside of say a a mixed integer\nor integer solver ... something like glpk or cbc. You'd need to\nreformulate the problem, but we've built applications using these tools\nwhich can crunch through multiple billions of combinations in under 1 or\n2 seconds.\n\n(Of course, you still need to store the results, and feed the input,\nusing a database of some kind).\n\n\n--\nAnthony Presley\n\nOn Tue, 2009-06-16 at 14:35 -0400, Hartman, Matthew wrote:\n> Good afternoon.\n> \n> I have developed an application to efficiently schedule chemotherapy\n> patients at our hospital. The application takes into account several\n> resource constraints (available chairs, available nurses, nurse\ncoverage\n> assignment to chairs) as well as the chair time and nursing time\n> required for a regimen.\n> \n> The algorithm for packing appointments in respects each constraint and\n> typically schedules a day of treatments (30-60) within 9-10 seconds on\n> my workstation, down from 27 seconds initially. I would like to get it\n> below 5 seconds if possible.\n> \n> I think what's slowing is down is simply the number of rows and joins.\n> The algorithm creates a scheduling matrix with one row per 5 minute\n> timeslot, per unit, per nurse assigned to the unit. That translates to\n> 3,280 rows for the days I have designed in development (each day can\n> change). \n> \n> To determine the available slots, the algorithm finds the earliest\nslot\n> that has an available chair and a count of the required concurrent\n> intervals afterwards. So a 60 minute regimen requires 12 concurrent\n> rows. This is accomplished by joining the table on itself. A second\n> query is ran for the same range, but with respect to the nurse time\nand\n> an available nurse. Finally, those two are joined against each other.\n> Effectively, it is:\n> \n> Select *\n> From (\n> \tSelect *\n> \tFrom matrix m1, matrix m2\n> \tWhere m1.xxxxx = m2.xxxxx\n> \t) chair,\n> \t(\n> \tSelect *\n> \tFrom matrix m1, matrix m2\n> \tWhere m1.xxxxx = m2.xxxxx\n> \t) nurse\n> Where chair.id = nurse.id\n> \n> With matrix having 3,280 rows. Ugh.\n> \n> I have tried various indexes and clustering approachs with little\n> success. Any ideas?\n> \n> Thanks,\n> \n> Matthew Hartman\n> Programmer/Analyst\n> Information Management, ICP\n> Kingston General Hospital\n> (613) 549-6666 x4294 \n> \n> \n\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 17 Jun 2009 11:13:14 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query." 
}, { "msg_contents": "\nOn Tue, 2009-06-16 at 14:35 -0400, Hartman, Matthew wrote:\n\n> The algorithm for packing appointments in respects each constraint and\n> typically schedules a day of treatments (30-60) within 9-10 seconds on\n> my workstation, down from 27 seconds initially. I would like to get it\n> below 5 seconds if possible.\n> \n> I think what's slowing is down is simply the number of rows and joins.\n> The algorithm creates a scheduling matrix with one row per 5 minute\n> timeslot, per unit, per nurse assigned to the unit. That translates to\n> 3,280 rows for the days I have designed in development (each day can\n> change). \n\nISTM the efficiency of your algorithm is geometrically related to the\nnumber of time slots into which appointments might fit. So reduce number\nof possible time slots...\n\nAssign the slot (randomly/hash/round robin) to either the morning or the\nafternoon and then run exactly same queries just with half number of\ntime slots. That should reduce your execution time by one quarter\nwithout using multiple CPUs for each morning/afternoon. Then run twice,\nonce for morning, once for afternoon.\n\nYou could parallelise this and run both at same time on different CPUs,\nif the extra work is worthwhile, but it seems not, judging from your\nrequirements.\n\nAnother way would be to arrange all appointments that need odd number of\ntimeslots into pairs so that you have at most one appointment that needs\nan odd number of timeslots. Then schedule appointments on 10 minute\nboundaries, rounding up their timeslot requirement. (The single odd\ntimeslot appointment will always waste 1 timeslot).\n\nHope that helps.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 07 Jul 2009 10:38:58 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query." }, { "msg_contents": "> From: Simon Riggs [mailto:[email protected]]\n> Sent: Tuesday, July 07, 2009 5:39 AM\n> \n> Another way would be to arrange all appointments that need odd number\nof\n> timeslots into pairs so that you have at most one appointment that\nneeds\n> an odd number of timeslots. Then schedule appointments on 10 minute\n> boundaries, rounding up their timeslot requirement. (The single odd\n> timeslot appointment will always waste 1 timeslot).\n\nNow THAT is an interesting idea. I'll have to play with this in my head\na bit (during really boring meetings) and get back to you. Thanks!\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n\n\n", "msg_date": "Tue, 7 Jul 2009 08:32:58 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query." } ]
[ { "msg_contents": "Here's the query:\n\nselect photos.*\nfrom photos\ninner join event_participations on\n event_participations.user_id = photos.creator_id and\n event_participations.attend = true\ninner join event_instances on\n event_instances.id = event_participations.event_instance_id\nwhere (\n (event_instances.venue_id = 1290) and\n (photos.taken_at > (event_instances.time + interval '-3600 seconds')) and\n (photos.taken_at < (event_instances.time + interval '25200 seconds'))\n)\norder by taken_at desc\nlimit 20\n\nIt occasionally takes four minutes to run:\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..10997.65 rows=20 width=116) (actual\ntime=262614.474..262614.474 rows=0 loops=1)\n -> Nested Loop (cost=0.00..5729774.95 rows=10420 width=116)\n(actual time=262614.470..262614.470 rows=0 loops=1)\n Join Filter: ((photos.taken_at > (event_instances.\"time\" +\n'-01:00:00'::interval)) AND (photos.taken_at < (event_instances.\"time\"\n+ '07:00:00'::interval)))\n -> Nested Loop (cost=0.00..2055574.35 rows=11869630\nwidth=120) (actual time=21.750..121838.012 rows=14013998 loops=1)\n -> Index Scan Backward using photos_taken_at on photos\n (cost=0.00..40924.34 rows=544171 width=116) (actual\ntime=14.997..1357.724 rows=544171 loops=1)\n -> Index Scan using event_participations_user_id_index\non event_participations (cost=0.00..2.95 rows=60 width=8) (actual\ntime=0.007..0.159 rows=26 loops=544171)\n Index Cond: (event_participations.user_id =\nphotos.creator_id)\n Filter: event_participations.attend\n -> Index Scan using event_instances_pkey on event_instances\n(cost=0.00..0.29 rows=1 width=12) (actual time=0.008..0.008 rows=0\nloops=14013998)\n Index Cond: (event_instances.id =\nevent_participations.event_instance_id)\n Filter: (event_instances.venue_id = 1290)\n Total runtime: 262614.585 ms\n\nWith enable_nestloop to false, it takes about 1 second to run.\n\nDatabase is freshly analyzed and vacuumed. Default statistics target\nis 100. I have tried increasing the stats on\nevent_participations.user_id, event_participations.event_instance_id\nand photos.taken_at to 1000, but no improvement.\n\nThis is PostgreSQL 8.3.3.\n\nA.\n", "msg_date": "Tue, 16 Jun 2009 15:45:33 +0200", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": true, "msg_subject": "Yet another slow nested loop" }, { "msg_contents": "> -----Original Message-----\n> From: Alexander Staubo\n> \n> -> Nested Loop (cost=0.00..5729774.95 rows=10420 width=116)\n> (actual time=262614.470..262614.470 rows=0 loops=1)\n> Join Filter: ((photos.taken_at > (event_instances.\"time\" +\n> '-01:00:00'::interval)) AND (photos.taken_at < (event_instances.\"time\"\n> + '07:00:00'::interval)))\n> -> Nested Loop (cost=0.00..2055574.35 rows=11869630\n> width=120) (actual time=21.750..121838.012 rows=14013998 loops=1)\n\n\nDo you have any of the other enable_* options set to false? What do you\nhave random_page_cost set to? 
I ask because I'm surprised to see postgres\nchoose to loop when it knows it will have to loop 11 million times.\n\nDave\n\n\n", "msg_date": "Tue, 16 Jun 2009 08:56:38 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another slow nested loop" }, { "msg_contents": "On Tue, Jun 16, 2009 at 3:56 PM, Dave Dutcher<[email protected]> wrote:\n>> -----Original Message-----\n>> From: Alexander Staubo\n>>\n>>    ->  Nested Loop  (cost=0.00..5729774.95 rows=10420 width=116)\n>> (actual time=262614.470..262614.470 rows=0 loops=1)\n>>          Join Filter: ((photos.taken_at > (event_instances.\"time\" +\n>> '-01:00:00'::interval)) AND (photos.taken_at < (event_instances.\"time\"\n>> + '07:00:00'::interval)))\n>>          ->  Nested Loop  (cost=0.00..2055574.35 rows=11869630\n>> width=120) (actual time=21.750..121838.012 rows=14013998 loops=1)\n>\n>\n> Do you have any of the other enable_* options set to false?\n\nNo.\n\n> What do you\n> have random_page_cost set to?  I ask because I'm surprised to see postgres\n> choose to loop when it knows it will have to loop 11 million times.\n\nThe default, ie. 4.0.\n\nA.\n", "msg_date": "Tue, 16 Jun 2009 15:58:35 +0200", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Yet another slow nested loop" }, { "msg_contents": "Alexander Staubo <[email protected]> writes:\n> Here's the query:\n> select photos.*\n> from photos\n> inner join event_participations on\n> event_participations.user_id = photos.creator_id and\n> event_participations.attend = true\n> inner join event_instances on\n> event_instances.id = event_participations.event_instance_id\n> where (\n> (event_instances.venue_id = 1290) and\n> (photos.taken_at > (event_instances.time + interval '-3600 seconds')) and\n> (photos.taken_at < (event_instances.time + interval '25200 seconds'))\n> )\n> order by taken_at desc\n> limit 20\n\n> It occasionally takes four minutes to run:\n\nActually the easiest way to fix that is to get rid of the LIMIT.\n(Maybe use a cursor instead, and fetch only twenty rows.) LIMIT\nmagnifies the risks from any estimation error, and you've got a lot\nof that here ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Jun 2009 10:36:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another slow nested loop " }, { "msg_contents": "On Tue, Jun 16, 2009 at 4:36 PM, Tom Lane<[email protected]> wrote:\n> Actually the easiest way to fix that is to get rid of the LIMIT.\n> (Maybe use a cursor instead, and fetch only twenty rows.)  LIMIT\n> magnifies the risks from any estimation error, and you've got a lot\n> of that here ...\n\nThere's no cursor support in ActiveRecord, the ORM library we use, and\nI'm not going to write it. Anyway, I would prefer not to gloss over\nthe underlying problem with something that requires a \"TODO\" next to\nit. What can be done to fix the underlying problem? Nothing?\n\nA.\n", "msg_date": "Tue, 16 Jun 2009 17:16:35 +0200", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Yet another slow nested loop" }, { "msg_contents": "On Tue, Jun 16, 2009 at 11:16 AM, Alexander Staubo<[email protected]> wrote:\n> On Tue, Jun 16, 2009 at 4:36 PM, Tom Lane<[email protected]> wrote:\n>> Actually the easiest way to fix that is to get rid of the LIMIT.\n>> (Maybe use a cursor instead, and fetch only twenty rows.)  
LIMIT\n>> magnifies the risks from any estimation error, and you've got a lot\n>> of that here ...\n>\n> There's no cursor support in ActiveRecord, the ORM library we use, and\n> I'm not going to write it. Anyway, I would prefer not to gloss over\n> the underlying problem with something that requires a \"TODO\" next to\n> it. What can be done to fix the underlying problem? Nothing?\n\nBasically, we need a system that can accurately estimate multi-column\nselectivity, or else some kind of planner hints.\n\nhttp://archives.postgresql.org/pgsql-performance/2009-06/msg00023.php\nhttp://archives.postgresql.org/pgsql-performance/2009-06/msg00119.php\n\n(with apologies for linking to my own posts, but you can go back and\nread the whole thread if you're interested)\n\n...Robert\n", "msg_date": "Wed, 17 Jun 2009 00:59:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another slow nested loop" } ]
[ { "msg_contents": "Even if the query end in aproximately 200 sec, the explain analyze is\nstill working and there are gone more than 1000 sec...\nI leave it working this night.\nHave a nice evening and thenks for the help.\n\n", "msg_date": "Tue, 16 Jun 2009 18:37:44 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance with query" } ]
[ { "msg_contents": "\n\n", "msg_date": "Tue, 16 Jun 2009 17:01:35 -0400", "msg_from": "\"Mark Steben\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance discrepancy " } ]
[ { "msg_contents": "I'm trying to figure out how to optimize this query (yes, I ran vacuum/analyze):\n\nmusecurity=# explain DELETE FROM muapp.pcap_store WHERE pcap_storeid\nNOT IN (SELECT pcap_storeid FROM muapp.pcap_store_log);\n QUERY PLAN\n------------------------------------------------------------------------------------\n Seq Scan on pcap_store (cost=4008.22..348521303.54 rows=106532 width=6)\n Filter: (NOT (subplan))\n SubPlan\n -> Materialize (cost=4008.22..6765.98 rows=205475 width=4)\n -> Seq Scan on pcap_store_log (cost=0.00..3099.75\nrows=205475 width=4)\n(5 rows)\n\nmusecurity=# \\d muapp.pcap_store\n Table \"muapp.pcap_store\"\n Column | Type |\n Modifiers\n-------------------+------------------------+-------------------------------------------------------------------------\n pcap_storeid | integer | not null default\nnextval('muapp.pcap_store_pcap_storeid_seq'::regclass)\n filename | character varying(255) |\n test_run_dutid | integer | default 0\n userid | integer | not null default 0\n analysis_recordid | bigint |\n io_xml | character varying(255) |\nIndexes:\n \"pcap_store_pkey\" PRIMARY KEY, btree (pcap_storeid)\nForeign-key constraints:\n \"pcap_store_analysis_recordid_fkey\" FOREIGN KEY\n(analysis_recordid) REFERENCES muapp.analysis(recordid) ON DELETE\nCASCADE\n \"pcap_store_test_run_dutid_fkey\" FOREIGN KEY (test_run_dutid)\nREFERENCES muapp.test_run_dut(test_run_dutid) ON DELETE CASCADE\n \"pcap_store_userid_fkey\" FOREIGN KEY (userid) REFERENCES\nmucore.\"user\"(recordid) ON DELETE CASCADE\n\nAs you see, the sequence scan on pcap_store is killing me, even though\nthere appears to be a perfectly good index. Is there a better way\nconstruct this query?\n\nThanks,\nAaron\n\n-- \nAaron Turner\nhttp://synfin.net/\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little temporary\nSafety, deserve neither Liberty nor Safety.\n -- Benjamin Franklin\n", "msg_date": "Tue, 16 Jun 2009 14:28:19 -0700", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "High cost of ... where ... not in (select ...)" }, { "msg_contents": "Aaron Turner escribi�:\n> I'm trying to figure out how to optimize this query (yes, I ran vacuum/analyze):\n> \n> musecurity=# explain DELETE FROM muapp.pcap_store WHERE pcap_storeid\n> NOT IN (SELECT pcap_storeid FROM muapp.pcap_store_log);\n\nWhat PG version is this?\n\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 16 Jun 2009 17:37:50 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High cost of ... where ... not in (select ...)" }, { "msg_contents": "On Tue, Jun 16, 2009 at 2:37 PM, Alvaro\nHerrera<[email protected]> wrote:\n> Aaron Turner escribió:\n>> I'm trying to figure out how to optimize this query (yes, I ran vacuum/analyze):\n>>\n>> musecurity=# explain DELETE FROM muapp.pcap_store WHERE pcap_storeid\n>> NOT IN (SELECT pcap_storeid FROM muapp.pcap_store_log);\n>\n> What PG version is this?\n\nDoh, just realized I didn't reply back to list. 
It's version 8.3.3.\n\nAlso, pcap_storeid is unique in pcap_store_log\n\n\n-- \nAaron Turner\nhttp://synfin.net/\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little temporary\nSafety, deserve neither Liberty nor Safety.\n -- Benjamin Franklin\n", "msg_date": "Tue, 16 Jun 2009 16:39:15 -0700", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High cost of ... where ... not in (select ...)" }, { "msg_contents": "On Tue, Jun 16, 2009 at 7:39 PM, Aaron Turner<[email protected]> wrote:\n> On Tue, Jun 16, 2009 at 2:37 PM, Alvaro\n> Herrera<[email protected]> wrote:\n>> Aaron Turner escribió:\n>>> I'm trying to figure out how to optimize this query (yes, I ran vacuum/analyze):\n>>>\n>>> musecurity=# explain DELETE FROM muapp.pcap_store WHERE pcap_storeid\n>>> NOT IN (SELECT pcap_storeid FROM muapp.pcap_store_log);\n>>\n>> What PG version is this?\n>\n> Doh, just realized I didn't reply back to list.   It's version 8.3.3.\n>\n> Also, pcap_storeid is unique in pcap_store_log\n\nSpeaking as one who has dealt with this frustration more than once,\nyou can typically get better performance with something like:\n\nDELETE FROM muapp.pcap_store AS x\nFROM muapp.pcap_store a\nLEFT JOIN muapp.pcap_store_log b ON a.pcap_store_id = b.pcap_storeid\nWHERE x.pcap_storeid = a.pcap_storeid AND b.pcap_storeid IS NULL\n\nThis is emphatically lame, but there you have it. It's first of all\nlame that we can't do a better job optimizing NOT-IN, at least when\nthe expression within the subselect is known to be not-null, and it's\nsecondly lame that the syntax of DELETE doesn't permit a LEFT JOIN\nwithout a self-JOIN.\n\n</rant>\n\n...Robert\n", "msg_date": "Tue, 16 Jun 2009 20:30:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High cost of ... where ... not in (select ...)" }, { "msg_contents": "On Tue, Jun 16, 2009 at 5:30 PM, Robert Haas<[email protected]> wrote:\n> On Tue, Jun 16, 2009 at 7:39 PM, Aaron Turner<[email protected]> wrote:\n>> On Tue, Jun 16, 2009 at 2:37 PM, Alvaro\n>> Herrera<[email protected]> wrote:\n>>> Aaron Turner escribió:\n>>>> I'm trying to figure out how to optimize this query (yes, I ran vacuum/analyze):\n>>>>\n>>>> musecurity=# explain DELETE FROM muapp.pcap_store WHERE pcap_storeid\n>>>> NOT IN (SELECT pcap_storeid FROM muapp.pcap_store_log);\n>>>\n>>> What PG version is this?\n>>\n>> Doh, just realized I didn't reply back to list.   It's version 8.3.3.\n>>\n>> Also, pcap_storeid is unique in pcap_store_log\n>\n> Speaking as one who has dealt with this frustration more than once,\n> you can typically get better performance with something like:\n>\n> DELETE FROM muapp.pcap_store AS x\n> FROM muapp.pcap_store a\n> LEFT JOIN muapp.pcap_store_log b ON a.pcap_store_id = b.pcap_storeid\n> WHERE x.pcap_storeid = a.pcap_storeid AND b.pcap_storeid IS NULL\n\nThat's a syntax error on 8.3.3... I don't see anywhere in the docs\nwhere the delete command allows for multiple FROM statements. Perhaps\nyou meant:\n\n DELETE FROM muapp.pcap_store AS x\n USING muapp.pcap_store AS a\n LEFT JOIN muapp.pcap_store_log b ON a.pcap_storeid =\nb.pcap_storeid WHERE x.pcap_storeid = a.pcap_storeid AND\nb.pcap_storeid IS NULL;\n\nIs that right?\n\n> This is emphatically lame, but there you have it.  
It's first of all\n> lame that we can't do a better job optimizing NOT-IN, at least when\n> the expression within the subselect is known to be not-null, and it's\n> secondly lame that the syntax of DELETE doesn't permit a LEFT JOIN\n> without a self-JOIN.\n\nWow, glad I asked... I never would of figured that out.\n\n-- \nAaron Turner\nhttp://synfin.net/\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little temporary\nSafety, deserve neither Liberty nor Safety.\n -- Benjamin Franklin\n", "msg_date": "Tue, 16 Jun 2009 18:23:45 -0700", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High cost of ... where ... not in (select ...)" }, { "msg_contents": "On Tue, Jun 16, 2009 at 9:23 PM, Aaron Turner<[email protected]> wrote:\n> On Tue, Jun 16, 2009 at 5:30 PM, Robert Haas<[email protected]> wrote:\n>> On Tue, Jun 16, 2009 at 7:39 PM, Aaron Turner<[email protected]> wrote:\n>>> On Tue, Jun 16, 2009 at 2:37 PM, Alvaro\n>>> Herrera<[email protected]> wrote:\n>>>> Aaron Turner escribió:\n>>>>> I'm trying to figure out how to optimize this query (yes, I ran vacuum/analyze):\n>>>>>\n>>>>> musecurity=# explain DELETE FROM muapp.pcap_store WHERE pcap_storeid\n>>>>> NOT IN (SELECT pcap_storeid FROM muapp.pcap_store_log);\n>>>>\n>>>> What PG version is this?\n>>>\n>>> Doh, just realized I didn't reply back to list.   It's version 8.3.3.\n>>>\n>>> Also, pcap_storeid is unique in pcap_store_log\n>>\n>> Speaking as one who has dealt with this frustration more than once,\n>> you can typically get better performance with something like:\n>>\n>> DELETE FROM muapp.pcap_store AS x\n>> FROM muapp.pcap_store a\n>> LEFT JOIN muapp.pcap_store_log b ON a.pcap_store_id = b.pcap_storeid\n>> WHERE x.pcap_storeid = a.pcap_storeid AND b.pcap_storeid IS NULL\n>\n> That's a syntax error on 8.3.3... I don't see anywhere in the docs\n> where the delete command allows for multiple FROM statements.  Perhaps\n> you meant:\n>\n>  DELETE FROM muapp.pcap_store AS x\n>        USING muapp.pcap_store AS a\n>        LEFT JOIN muapp.pcap_store_log b ON a.pcap_storeid =\n> b.pcap_storeid WHERE x.pcap_storeid = a.pcap_storeid AND\n> b.pcap_storeid IS NULL;\n>\n> Is that right?\n\nWoops, yes, I think that's it.\n\n(but I don't guarantee that it won't blow up your entire universe, so\ntest it carefully first)\n\n...Robert\n", "msg_date": "Tue, 16 Jun 2009 21:36:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High cost of ... where ... not in (select ...)" }, { "msg_contents": "On Tue, Jun 16, 2009 at 6:36 PM, Robert Haas<[email protected]> wrote:\n> On Tue, Jun 16, 2009 at 9:23 PM, Aaron Turner<[email protected]> wrote:\n\n>>  DELETE FROM muapp.pcap_store AS x\n>>        USING muapp.pcap_store AS a\n>>        LEFT JOIN muapp.pcap_store_log b ON a.pcap_storeid =\n>> b.pcap_storeid WHERE x.pcap_storeid = a.pcap_storeid AND\n>> b.pcap_storeid IS NULL;\n>>\n>> Is that right?\n>\n> Woops, yes, I think that's it.\n>\n> (but I don't guarantee that it won't blow up your entire universe, so\n> test it carefully first)\n\nYeah, doing that now... 
taking a bit longer then I expected (took\n~5min on rather slow hardware- everything is on a pair of 10K RAID1\ndrives), but the result seems correct.\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Hash Join (cost=19229.08..29478.99 rows=106492 width=6)\n Hash Cond: (x.pcap_storeid = a.pcap_storeid)\n -> Seq Scan on pcap_store x (cost=0.00..5617.84 rows=212984 width=10)\n -> Hash (cost=17533.93..17533.93 rows=106492 width=4)\n -> Hash Left Join (cost=6371.19..17533.93 rows=106492 width=4)\n Hash Cond: (a.pcap_storeid = b.pcap_storeid)\n Filter: (b.pcap_storeid IS NULL)\n -> Seq Scan on pcap_store a (cost=0.00..5617.84\nrows=212984 width=4)\n -> Hash (cost=3099.75..3099.75 rows=205475 width=4)\n -> Seq Scan on pcap_store_log b\n(cost=0.00..3099.75 rows=205475 width=4)\n\nI know the costs are just relative, but I assumed\ncost=19229.08..29478.99 isn't 5 minutes of effort even on crappy\nhardware. Honestly, not complaining, 5 minutes is acceptable for this\nquery (it's a one time thing) just surprised is all.\n\nThanks for the help!\n\n-- \nAaron Turner\nhttp://synfin.net/\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little temporary\nSafety, deserve neither Liberty nor Safety.\n -- Benjamin Franklin\n", "msg_date": "Tue, 16 Jun 2009 18:46:48 -0700", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High cost of ... where ... not in (select ...)" }, { "msg_contents": "On Tue, Jun 16, 2009 at 6:36 PM, Robert Haas<[email protected]> wrote:\n> On Tue, Jun 16, 2009 at 9:23 PM, Aaron Turner<[email protected]> wrote:\n\n>>  DELETE FROM muapp.pcap_store AS x\n>>        USING muapp.pcap_store AS a\n>>        LEFT JOIN muapp.pcap_store_log b ON a.pcap_storeid =\n>> b.pcap_storeid WHERE x.pcap_storeid = a.pcap_storeid AND\n>> b.pcap_storeid IS NULL;\n>>\n>> Is that right?\n>\n> Woops, yes, I think that's it.\n>\n> (but I don't guarantee that it won't blow up your entire universe, so\n> test it carefully first)\n\nYeah, doing that now... taking a bit longer then I expected (took\n~5min on rather slow hardware- everything is on a pair of 10K RAID1\ndrives), but the result seems correct.\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Hash Join (cost=19229.08..29478.99 rows=106492 width=6)\n Hash Cond: (x.pcap_storeid = a.pcap_storeid)\n -> Seq Scan on pcap_store x (cost=0.00..5617.84 rows=212984 width=10)\n -> Hash (cost=17533.93..17533.93 rows=106492 width=4)\n -> Hash Left Join (cost=6371.19..17533.93 rows=106492 width=4)\n Hash Cond: (a.pcap_storeid = b.pcap_storeid)\n Filter: (b.pcap_storeid IS NULL)\n -> Seq Scan on pcap_store a (cost=0.00..5617.84\nrows=212984 width=4)\n -> Hash (cost=3099.75..3099.75 rows=205475 width=4)\n -> Seq Scan on pcap_store_log b\n(cost=0.00..3099.75 rows=205475 width=4)\n\nI know the costs are just relative, but I assumed\ncost=19229.08..29478.99 isn't 5 minutes of effort even on crappy\nhardware. 
Honestly, not complaining, 5 minutes is acceptable for this\nquery (it's a one time thing) just surprised is all.\n\nThanks for the help!\n\n-- \nAaron Turner\nhttp://synfin.net/\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little temporary\nSafety, deserve neither Liberty nor Safety.\n -- Benjamin Franklin\n", "msg_date": "Tue, 16 Jun 2009 18:47:20 -0700", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High cost of ... where ... not in (select ...)" }, { "msg_contents": "Aaron Turner <[email protected]> writes:\n> I know the costs are just relative, but I assumed\n> cost=19229.08..29478.99 isn't 5 minutes of effort even on crappy\n> hardware.\n\nVery likely the bulk of the time is spent in the DELETE work proper,\nnot in the query to find the rows to be deleted. In particular I wonder\nif you have an unindexed foreign key referencing this table ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 00:06:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High cost of ... where ... not in (select ...) " } ]
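A footnote to the NOT IN thread above: the same anti-join can also be spelled with NOT EXISTS, which 8.4 plans as a proper anti-join and which even on 8.3 becomes one indexed probe per row, provided pcap_store_log.pcap_storeid carries an index (a unique constraint would provide one). A sketch:

DELETE FROM muapp.pcap_store AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM muapp.pcap_store_log AS l
    WHERE l.pcap_storeid = s.pcap_storeid
);

And per Tom's closing remark, if the five minutes are going into the DELETE itself rather than into finding the rows, the usual culprit is an unindexed column in some other table whose foreign key references pcap_store with ON DELETE CASCADE; indexing that referencing column is the fix.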
[ { "msg_contents": "Hi,\n\nI have been trying to fix a performance issue that we have which I have \ntracked down to index scans being done on a particular table (or set of \ntables):\n\nThe following query:\nexplain analyze select *\nFROM inbound.event_20090526 e\nLEFT OUTER JOIN inbound.internal_host i ON (e.mta_host_id = i.id)\nLEFT OUTER JOIN inbound.internal_host iaa ON (e.aamta_host_id = iaa.id)\nLEFT OUTER JOIN inbound.event_status es ON (e.event_status_id = es.id)\nLEFT OUTER JOIN inbound.threat t ON (e.threat_id = t.id), inbound.domain \nd, inbound.event_type et\nWHERE e.domain_id = d.id\nAND e.event_type_id = et.id\nAND d.name IN (\n 'testdomain.com'\n);\n\n\nDoes this:\n \nQUERY \nPLAN \n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..10887.03 rows=8 width=2021) (actual \ntime=50.352..14378.603 rows=3453 loops=1)\n -> Nested Loop Left Join (cost=0.00..10882.23 rows=8 width=1985) \n(actual time=50.346..14372.820 rows=3453 loops=1)\n -> Nested Loop Left Join (cost=0.00..10877.43 rows=8 \nwidth=1949) (actual time=50.336..14358.101 rows=3453 loops=1)\n -> Nested Loop Left Join (cost=0.00..10872.63 rows=8 \nwidth=1801) (actual time=50.321..14344.603 rows=3453 loops=1)\n -> Nested Loop (cost=0.00..10867.83 rows=8 \nwidth=1764) (actual time=50.315..14336.979 rows=3453 loops=1)\n -> Nested Loop (cost=0.00..10863.03 rows=8 \nwidth=1728) (actual time=50.288..14308.368 rows=3453 loops=1)\n -> Index Scan using domain_name_idx on \ndomain d (cost=0.00..6.63 rows=1 width=452) (actual time=0.049..0.080 \nrows=1 loops=1)\n Index Cond: ((name)::text = \n'testdomain.com'::text)\n -> Index Scan using \nevent_20090526_domain_idx on event_20090526 e (cost=0.00..10694.13 \nrows=3606 width=1276) (actual time=50.233..14305.211 rows=3453 loops=1)\n Index Cond: (e.domain_id = d.id)\n -> Index Scan using event_type_pkey on \nevent_type et (cost=0.00..0.56 rows=1 width=36) (actual \ntime=0.006..0.006 rows=1 loops=3453)\n Index Cond: (et.id = e.event_type_id)\n -> Index Scan using threat_pkey on threat t \n(cost=0.00..0.56 rows=1 width=37) (actual time=0.000..0.000 rows=0 \nloops=3453)\n Index Cond: (e.threat_id = t.id)\n -> Index Scan using event_status_pkey on event_status \nes (cost=0.00..0.56 rows=1 width=148) (actual time=0.002..0.002 rows=1 \nloops=3453)\n Index Cond: (e.event_status_id = es.id)\n -> Index Scan using internal_host_pkey on internal_host iaa \n(cost=0.00..0.56 rows=1 width=36) (actual time=0.002..0.003 rows=1 \nloops=3453)\n Index Cond: (e.aamta_host_id = iaa.id)\n -> Index Scan using internal_host_pkey on internal_host i \n(cost=0.00..0.56 rows=1 width=36) (actual time=0.000..0.000 rows=0 \nloops=3453)\n Index Cond: (e.mta_host_id = i.id)\n Total runtime: 14380.000 ms\n\nIf the same query is done straight away again we get:\n \nQUERY \nPLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..10887.03 rows=8 width=2021) (actual \ntime=0.165..67.388 rows=3453 loops=1)\n -> Nested Loop Left Join (cost=0.00..10882.23 rows=8 width=1985) \n(actual time=0.162..61.973 rows=3453 loops=1)\n -> Nested Loop Left Join (cost=0.00..10877.43 rows=8 \nwidth=1949) (actual time=0.156..49.756 rows=3453 loops=1)\n -> Nested Loop Left Join 
(cost=0.00..10872.63 rows=8 \nwidth=1801) (actual time=0.148..37.522 rows=3453 loops=1)\n -> Nested Loop (cost=0.00..10867.83 rows=8 \nwidth=1764) (actual time=0.146..31.920 rows=3453 loops=1)\n -> Nested Loop (cost=0.00..10863.03 rows=8 \nwidth=1728) (actual time=0.129..10.325 rows=3453 loops=1)\n -> Index Scan using domain_name_idx on \ndomain d (cost=0.00..6.63 rows=1 width=452) (actual time=0.099..0.139 \nrows=1 loops=1)\n Index Cond: ((name)::text = \n'rhe.com.au'::text)\n -> Index Scan using \nevent_20090526_domain_idx on event_20090526 e (cost=0.00..10694.13 \nrows=3606 width=1276) (actual time=0.027..7.510 rows=3453 loops=1)\n Index Cond: (e.domain_id = d.id)\n -> Index Scan using event_type_pkey on \nevent_type et (cost=0.00..0.56 rows=1 width=36) (actual \ntime=0.004..0.005 rows=1 loops=3453)\n Index Cond: (et.id = e.event_type_id)\n -> Index Scan using threat_pkey on threat t \n(cost=0.00..0.56 rows=1 width=37) (actual time=0.000..0.000 rows=0 \nloops=3453)\n Index Cond: (e.threat_id = t.id)\n -> Index Scan using event_status_pkey on event_status \nes (cost=0.00..0.56 rows=1 width=148) (actual time=0.002..0.002 rows=1 \nloops=3453)\n Index Cond: (e.event_status_id = es.id)\n -> Index Scan using internal_host_pkey on internal_host iaa \n(cost=0.00..0.56 rows=1 width=36) (actual time=0.002..0.002 rows=1 \nloops=3453)\n Index Cond: (e.aamta_host_id = iaa.id)\n -> Index Scan using internal_host_pkey on internal_host i \n(cost=0.00..0.56 rows=1 width=36) (actual time=0.000..0.000 rows=0 \nloops=3453)\n Index Cond: (e.mta_host_id = i.id)\n Total runtime: 68.475 ms\n\nWhich suggests to me that it takes the remainder of the 14300ms in the \nfirst query to read the event_20090526_domain_idx index in from disk.\n\nThe table has 2 million records in it, and the index has a physical size \non disk of 44MB. The box this is running on is rather powerful having 8 \ncores, 16GB ram, and stripped disks.\n\nWhat I am wondering is is there any reason why this would be taking so \nlong? Or what can be done to fix this / track down the issue more?\n\nCheers\nBryce\n", "msg_date": "Wed, 17 Jun 2009 15:30:03 +1200", "msg_from": "Bryce Ewing <[email protected]>", "msg_from_op": true, "msg_subject": "Index Scan taking long time" }, { "msg_contents": "On Tue, Jun 16, 2009 at 9:30 PM, Bryce Ewing<[email protected]> wrote:\n> Hi,\n>\n> I have been trying to fix a performance issue that we have which I have\n> tracked down to index scans being done on a particular table (or set of\n> tables):\n>\n> The following query:\n> explain analyze select *\n> FROM inbound.event_20090526 e\n> LEFT OUTER JOIN inbound.internal_host i ON (e.mta_host_id = i.id)\n> LEFT OUTER JOIN inbound.internal_host iaa ON (e.aamta_host_id = iaa.id)\n> LEFT OUTER JOIN inbound.event_status es ON (e.event_status_id = es.id)\n> LEFT OUTER JOIN inbound.threat t ON (e.threat_id = t.id), inbound.domain d,\n> inbound.event_type et\n> WHERE e.domain_id = d.id\n> AND e.event_type_id = et.id\n> AND d.name IN (\n>   'testdomain.com'\n> );\n\nWithout looking at the explain just yet, it seems to me that you are\nconstraining the order of joins to insist that the left joins be done\nfirst, then the regular joins second, because of your mix of explicit\nand implicit join syntax. The query planner is constrained to run\nexplicit joins first, then implicit if I remember correctly. So,\nmaking it all explicit might help. Might not. 
But it's a thought\n", "msg_date": "Tue, 16 Jun 2009 23:12:48 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Scan taking long time" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> Without looking at the explain just yet, it seems to me that you are\n> constraining the order of joins to insist that the left joins be done\n> first, then the regular joins second, because of your mix of explicit\n> and implicit join syntax. The query planner is constrained to run\n> explicit joins first, then implicit if I remember correctly.\n\nThat isn't true as of recent releases (8.2 and up, I think). It is true\nthat there are semantic constraints that prevent certain combinations\nof inner and outer joins from being rearranged ... but if that applies\nhere, it would also prevent manual rearrangement, unless the OP decides\nthat this query doesn't express quite what he meant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 10:23:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Scan taking long time " }, { "msg_contents": "The nested loops (which are due to the joins) don't seem to be part of \nthe problem at all. The main time that is taken (actual time that is) \nis in this part:\n Index Scan using event_20090526_domain_idx on event_20090526 e\n (cost=0.00..10694.13 rows=3606 width=1276) (actual \ntime=50.233..14305.211 rows=3453 loops=1)\n Index Cond: (e.domain_id = d.id)\n\nWhich is the leaf node in the query plan, the total time for the query \nbeing: Total runtime: 14380.000 ms\n\nAnd as I said once that query is run once it then does the same query \nplan and has this output for the same leaf node above:\n Index Scan using event_20090526_domain_idx on event_20090526 e\n (cost=0.00..10694.13 rows=3606 width=1276) (actual time=0.027..7.510 \nrows=3453 loops=1)\n Index Cond: (e.domain_id = d.id)\n\nSo it seems to me that once the index is in memory everything is fine \nwith the world, but the loading of the index into memory is horrendous.\n\n\nTom Lane wrote:\n> Scott Marlowe <[email protected]> writes:\n> \n>> Without looking at the explain just yet, it seems to me that you are\n>> constraining the order of joins to insist that the left joins be done\n>> first, then the regular joins second, because of your mix of explicit\n>> and implicit join syntax. The query planner is constrained to run\n>> explicit joins first, then implicit if I remember correctly.\n>> \n>\n> That isn't true as of recent releases (8.2 and up, I think). It is true\n> that there are semantic constraints that prevent certain combinations\n> of inner and outer joins from being rearranged ... but if that applies\n> here, it would also prevent manual rearrangement, unless the OP decides\n> that this query doesn't express quite what he meant.\n>\n> \t\t\tregards, tom lane\n> \n\n-- \n\n*Bryce Ewing *| Platform Architect\n*DDI:* +64 9 950 2195 *Fax:* +64 9 302 0518\n*Mobile:* +64 21 432 293 *Freephone:* 0800 SMX SMX (769 769)\nLevel 11, 290 Queen Street, Auckland, New Zealand | SMX Ltd | smx.co.nz \n<http://smx.co.nz>\nSMX | Business Email Specialists\nThe information contained in this email and any attachments is \nconfidential. If you are not\nthe intended recipient then you must not use, disseminate, distribute or \ncopy any information\ncontained in this email or any attachments. 
If you have received this \nemail in error or you\nare not the originally intended recipient please contact SMX immediately \nand destroy this email.\n\n", "msg_date": "Thu, 18 Jun 2009 09:44:20 +1200", "msg_from": "Bryce Ewing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index Scan taking long time" }, { "msg_contents": "Bryce Ewing <[email protected]> writes:\n> So it seems to me that once the index is in memory everything is fine \n> with the world, but the loading of the index into memory is horrendous.\n\nSo it would seem. What's the disk hardware on this machine?\n\nIt's possible that part of the problem is table bloat, leading to the\nindexscan having to fetch many more pages than it would if the table\nwere more compact.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 18:43:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Scan taking long time " }, { "msg_contents": "Hi Tom,\n\nWe have managed to improve significantly on the speed of this query. \nThe way that we did this was through clustering the table based on the \ndomain index which significantly reduced the page reads that were \nrequired in order to perform the query.\n\nAlso to find this we turned on log_statement_stats to see what it was doing.\n\nThis was on a table of roughly 600MB where the domains were randomly \ndispersed.\n\nCheers\nBryce\n\nTom Lane wrote:\n> Bryce Ewing <[email protected]> writes:\n> \n>> So it seems to me that once the index is in memory everything is fine \n>> with the world, but the loading of the index into memory is horrendous.\n>> \n>\n> So it would seem. What's the disk hardware on this machine?\n>\n> It's possible that part of the problem is table bloat, leading to the\n> indexscan having to fetch many more pages than it would if the table\n> were more compact.\n>\n> \t\t\tregards, tom lane\n>\n> \n\n", "msg_date": "Fri, 19 Jun 2009 10:11:25 +1200", "msg_from": "Bryce Ewing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index Scan taking long time" } ]
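For readers landing on the index-scan thread above with the same symptom (first run dominated by random heap fetches, second run fast from cache), the fix Bryce describes comes down to the following; note that CLUSTER takes an exclusive lock and rewrites the table, and the USING form needs 8.3 or later (older releases spell it CLUSTER indexname ON tablename):

-- Physically order the rows by domain so one domain's events sit on few pages:
CLUSTER inbound.event_20090526 USING event_20090526_domain_idx;
ANALYZE inbound.event_20090526;

-- Writes per-statement resource usage to the server log, which is how
-- Bryce's team spotted where the time was going:
SET log_statement_stats = on;

CLUSTER only orders the rows that exist at the time it runs, so a table that keeps filling in random domain order will drift back and may need periodic re-clustering.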
[ { "msg_contents": "Hi,\n\nIt seems that a COPY of 8M rows to a table to 8.4rc1 takes >30% longer than\nit does to 8.3.7 on Solaris.\n\nHere are the steps I've taken to reproduce this problem on two different\nsolaris boxes (Solaris 10 11/06 s10x_u3wos_10 X86 and Solaris 10 8/07\ns10x_u4wos_12b X86). I've tried this on a Linux box, and I do not see the\nproblem there.\n\n1. Run the following in psql client to generate a 8M row data file.\n\ncopy (select generate_series(1,8000000), ('1 second'::interval *\ngenerate_series(1,8000000) + '2007-01-01'::timestamp)) to\n'/export/home/alan/work/pgsql/dump.out' with csv;\n\n2. Build 8.3.7 and 8.4rc1 with the following config.\n\n./configure --prefix=`pwd`/../pgsql CC=/opt/SUNWspro/bin/cc CFLAGS=\"-xO3\n-xarch=native \\\n-xspace -W0,-Lt -W2,-Rcond_elim -Xa -xildoff -xc99=none -xCC\"\n--without-readline --with-includes=/opt/csw/include\n--with-libraries=/opt/csw/lib\n\n3. Run the following on each.\n\npg_ctl stop -D data -m fast\nrm -rf data\ninitdb -D data\ncat postgresql.conf >> data/postgresql.conf\npg_ctl start -l cq.log -D data -w\npsql -f ddl.sql postgres\ntime psql -c \"copy t from '/export/home/alan/work/pgsql/dump.out' with csv\"\npostgres\n\nHere are the numbers from several runs I've done.\n\n8.3.7 - Solaris 10 11/06 s10x_u3wos_10 X86\nreal 0m43.971s\nuser 0m0.002s\nsys 0m0.003s\nreal 0m44.042s\nuser 0m0.002s\nsys 0m0.003s\nreal 0m44.828s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m43.921s\nuser 0m0.002s\nsys 0m0.003s\n\n8.4rc1 - Solaris 10 11/06 s10x_u3wos_10 X86\nreal 1m0.041s\nuser 0m0.002s\nsys 0m0.003s\nreal 1m0.258s\nuser 0m0.002s\nsys 0m0.004s\nreal 1m0.173s\nuser 0m0.002s\nsys 0m0.003s\nreal 1m0.402s\nuser 0m0.002s\nsys 0m0.003s\nreal 1m0.767s\nuser 0m0.002s\nsys 0m0.003s\n\n8.3.7 - Solaris 10 8/07 s10x_u4wos_12b X86\nreal 0m36.242s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m37.206s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m38.431s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m38.885s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m38.177s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m38.332s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m38.401s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m36.817s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m39.505s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m38.871s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m38.939s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m38.823s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m37.955s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m39.196s\nuser 0m0.002s\nsys 0m0.004s\n\n8.4rc1 - Solaris 10 8/07 s10x_u4wos_12b X86\nreal 0m50.603s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m49.945s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m50.547s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m50.061s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m48.151s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m50.133s\nuser 0m0.002s\nsys 0m0.004s\nreal 0m50.583s\nuser 0m0.002s\nsys 0m0.004s\n\nHas anyone else seen this problem?\n\nThanks, Alan", "msg_date": "Tue, 16 Jun 2009 22:46:12 -0700", "msg_from": "Alan Li <[email protected]>", "msg_from_op": true, "msg_subject": "8.4 COPY performance regression on Solaris" }, { "msg_contents": "Alan Li wrote:\n> Hi,\n> \n> It seems that a COPY of 8M rows to a table to 8.4rc1 takes >30% longer \n> than it does to 8.3.7 on Solaris.\n> \n> Here are the steps I've taken to reproduce this problem on two different \n> solaris boxes (Solaris 10 11/06 s10x_u3wos_10 X86 and Solaris 10 8/07 \n> s10x_u4wos_12b X86). 
I've tried this on a Linux box, and I do not see \n> the problem there.\n\ntried that on my box (though I increased the testset size by 10x to get \nmore sensible runtimes) and I can reproduce that on Linux(CentoS \n5.3/x86_64, Nehalem Xeon E5530) as well. I get ~450000 rows/s on 8.3 and \nonly ~330000/s on 8.4\n\n\n\non 8.4 I get:\n\n3m59/4m01/3m56s runtime and a profile of\n\nsamples % symbol name\n636302 19.6577 XLogInsert\n415510 12.8366 CopyReadLine\n225347 6.9618 DoCopy\n131143 4.0515 ParseDateTime\n122043 3.7703 DecodeNumber\n81730 2.5249 DecodeDate\n81045 2.5038 DecodeDateTime\n80900 2.4993 pg_verify_mbstr_len\n80235 2.4787 pg_next_dst_boundary\n67571 2.0875 LWLockAcquire\n64548 1.9941 heap_insert\n64178 1.9827 LWLockRelease\n63609 1.9651 PageAddItem\n63402 1.9587 heap_form_tuple\n56544 1.7468 timestamp_in\n48697 1.5044 heap_fill_tuple\n45248 1.3979 pg_atoi\n42390 1.3096 IsSystemRelation\n41287 1.2755 BufferGetBlockNumber\n38936 1.2029 ValidateDate\n36619 1.1313 ExecStoreTuple\n35367 1.0926 DecodeTime\n\non 8.3.7 I get 2m58s,2m54s,2m55s\n\nand a profile of:\n\nsamples % symbol name\n460966 16.2924 XLogInsert\n307386 10.8643 CopyReadLine\n301745 10.6649 DoCopy\n153452 5.4236 pg_next_dst_boundary\n119757 4.2327 DecodeNumber\n105356 3.7237 heap_formtuple\n83456 2.9497 ParseDateTime\n83020 2.9343 pg_verify_mbstr_len\n72735 2.5708 DecodeDate\n70425 2.4891 LWLockAcquire\n65820 2.3264 LWLockRelease\n61823 2.1851 DecodeDateTime\n55895 1.9756 hash_any\n51305 1.8133 PageAddItem\n47440 1.6767 AllocSetAlloc\n47218 1.6689 heap_insert\n38912 1.3753 DecodeTime\n34871 1.2325 ReadBuffer_common\n34519 1.2200 date2j\n33093 1.1696 DetermineTimeZoneOffset\n31334 1.1075 MemoryContextAllocZero\n30951 1.0939 RelationGetBufferForTuple\n\nIf I do the same test utilizing WAL bypass the picture changes:\n\n8.3 runtimes:2m16,2min14s,2min22s\n\nand profile:\n\nsamples % symbol name\n445583 16.7777 CopyReadLine\n332772 12.5300 DoCopy\n156974 5.9106 pg_next_dst_boundary\n131952 4.9684 heap_formtuple\n119114 4.4850 DecodeNumber\n94340 3.5522 ParseDateTime\n81624 3.0734 pg_verify_mbstr_len\n75012 2.8245 DecodeDate\n74950 2.8221 DecodeDateTime\n64467 2.4274 hash_any\n62859 2.3669 PageAddItem\n62054 2.3365 LWLockAcquire\n57209 2.1541 LWLockRelease\n45812 1.7250 hash_search_with_hash_value\n41530 1.5637 DetermineTimeZoneOffset\n40790 1.5359 heap_insert\n39694 1.4946 AllocSetAlloc\n38855 1.4630 ReadBuffer_common\n36056 1.3576 MemoryContextAllocZero\n36030 1.3567 DecodeTime\n29057 1.0941 UnpinBuffer\n28291 1.0653 PinBuffer\n\n\n8.4 runtime: 2m1s,2m,1m59s\n\nand profile:\n404775 17.9532 CopyReadLine\n208482 9.2469 DoCopy\n148898 6.6042 ParseDateTime\n118645 5.2623 DecodeNumber\n80972 3.5914 DecodeDate\n79005 3.5042 pg_verify_mbstr_len\n73645 3.2664 PageAddItem\n72167 3.2009 DecodeDateTime\n65264 2.8947 heap_form_tuple\n52680 2.3365 timestamp_in\n46264 2.0520 pg_next_dst_boundary\n45819 2.0322 ExecStoreTuple\n45745 2.0290 heap_fill_tuple\n43690 1.9378 heap_insert\n38453 1.7055 InputFunctionCall\n37050 1.6433 LWLockAcquire\n36853 1.6346 BufferGetBlockNumber\n36428 1.6157 heap_compute_data_size\n33818 1.5000 DetermineTimeZoneOffset\n33468 1.4844 DecodeTime\n30896 1.3703 tm2timestamp\n30888 1.3700 GetCurrentTransactionId\n\n\nStefan\n", "msg_date": "Wed, 17 Jun 2009 09:42:31 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 COPY performance regression on Solaris" } ]
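One clarification on the "WAL bypass" figures in the profile comparison above: on 8.3 and 8.4, COPY skips WAL only when WAL archiving is off and the target table was created, or (if memory serves) truncated, in the same transaction as the load. Reusing the table and path from Alan's recipe, the pattern is roughly:

-- archive_mode must be off for the bypass to apply
BEGIN;
TRUNCATE t;   -- or CREATE TABLE t (...) inside this same transaction
COPY t FROM '/export/home/alan/work/pgsql/dump.out' WITH csv;
COMMIT;

Comparing that run against the normal one separates the XLogInsert share of the profile from the per-row parsing cost; in Stefan's numbers the 8.4 regression disappears (and even reverses) once WAL is bypassed, which points at the WAL insert path rather than at COPY parsing.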
[ { "msg_contents": "Ok, here are the last rows for the vacuum analyze verbose\n\nINFO: free space map contains 154679 pages in 39 relations\nDETAIL: A total of 126176 page slots are in use (including overhead).\n126176 page slots are required to track all free space.\nCurrent limits are: 160000 page slots, 5000 relations, using 1476 kB.\nL'interrogazione è stata eseguita con successo, ma senza risultato, in\n1332269 ms.\n\n\nand I attach the complete explain analyze of the complex query.\nGiving more detail about the tables involved in the query could be not\nso easy as they are a lot.\nThe joins are made between columns that are primary key in a table and\nindexed in the other.\nAll the where clausole are on indexed colums (perhaps there are too many\nindexes...)\n\nThanks a lot.", "msg_date": "Wed, 17 Jun 2009 09:07:54 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> wrote: \n> Ok, here are the last rows for the vacuum analyze verbose\n> \n> INFO: free space map contains 154679 pages in 39 relations\n> DETAIL: A total of 126176 page slots are in use (including\n> overhead).\n> 126176 page slots are required to track all free space.\n> Current limits are: 160000 page slots, 5000 relations, using 1476\n? kB.\n \nNo indication of bloat there. You could afford to free some RAM by\nreducing the max_fsm_relations setting. (You have 39 relations but\nare reserving RAM to keep track of free space in 5000 relations.)\n \n> and I attach the complete explain analyze of the complex query.\n \nI'll see what I can glean from that when I get some time.\n \n> All the where clausole are on indexed colums (perhaps there are too\n> many indexes...)\n \nThat's not usually a problem.\n \nThe other thing I was hoping to see, which I don't think you've sent,\nis an EXPLAIN ANALYZE of the same query with the settings which you\nhave found which cause it to pick a faster plan. As I understand it,\nthat runs pretty fast, so hopefully that's a quick one for you to\nproduce.\n \n-Kevin\n", "msg_date": "Wed, 17 Jun 2009 09:48:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "That what i send is the quick execution, with other parameters this\nquery simply doesn't come to an end.\nIt is the little query that changing the settings (using the default\nwith all the query analyzer on) becames really quick, while with this\nsettings (with some analyzer switched off) became very slow.\n\nI don't belleve: using this settings\n\nset enable_hashjoin = 'on';\nset enable_nestloop = 'on';\nset enable_seqscan = 'on';\nset enable_sort = 'on';\n\n\nset from_collapse_limit = 8;\nset join_collapse_limit = 3; \n\n\nselect * from v_fsa_2007_estrazione;\nfinnally end in acceptable time (156 sec)\nwhat does it mean using join_collapse_limit = 3 (that is really a lot of\nobject less that what i'm using in taht query).\n\nI'm executing an explain analyze in this new situation...\nIt is possible that such a configuration can create performance problem\non other queryes? 
(join_collapse_limit is set to a really low value)\n\nI'll made some test in this direction.\n\n", "msg_date": "Wed, 17 Jun 2009 17:11:21 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> wrote: \n \n> what does it mean using join_collapse_limit = 3\n \nhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n \n-Kevin\n", "msg_date": "Wed, 17 Jun 2009 11:42:54 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "P.S.: to understand what the query has to make (and 80% of the view hve\nthese to make): a lot of time is spend to pivoting a table with a\nstructure like\nidentifier, description_of_value, numeric value\nthat has to be transformed in\nidentifier, description_1, description_2, ..., description_n\nwhere n is not a fixed number (it changes in function of the type of\ncalculation that was used to generate the rows in the table).\n\nperhaps this information could help.\n\nthanks everybady\n\n", "msg_date": "Thu, 18 Jun 2009 10:02:08 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance with query" }, { "msg_contents": "Alberto Dalmaso <[email protected]> wrote: \n> P.S.: to understand what the query has to make (and 80% of the view\n> hve these to make): a lot of time is spend to pivoting a table with\n> a structure like\n> identifier, description_of_value, numeric value\n> that has to be transformed in\n> identifier, description_1, description_2, ..., description_n\n> where n is not a fixed number (it changes in function of the type of\n> calculation that was used to generate the rows in the table).\n> \n> perhaps this information could help.\n \nWhat would help more is the actual query, if that can be shared. It\nleaves a lot less to the imagination than descriptions of it.\n \nThere are a couple things which have been requested which would help\npin down the reason the optimizer is not getting to a good plan, so\nthat it can be allowed to do a good job. As Tom said, this would be a\nmuch more productive focus than casting about for ways to force it to\ndo what you think is the best thing. (Maybe, given the chance, it can\ncome up with a plan which runs in seconds, rather than over the 24\nminutes you've gotten.)\n \nWith all the optimizer options on, and the from_collapse_limit and\njoin_collapse_limit values both set to 100, run an EXPLAIN (no\nANALYZE) on your big problem query. Let us know how long the EXPLAIN\nruns. If it gets any errors, copy and paste all available\ninformation. 
(General descriptions aren't likely to get us very far.)\nSince EXPLAIN without ANALYZE only *plans* the query, but doesn't run\nit, it should not take long to do this.\n \nIf there are any views or custom functions involved, showing those\nalong with the query source would be good.\n \nIf we get this information, we have a much better chance to find the\nreal problem and get it fixed.\n \n-Kevin\n", "msg_date": "Thu, 18 Jun 2009 14:26:58 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> With all the optimizer options on, and the from_collapse_limit and\n> join_collapse_limit values both set to 100, run an EXPLAIN (no\n> ANALYZE) on your big problem query. Let us know how long the EXPLAIN\n> runs. If it gets any errors, copy and paste all available\n> information. (General descriptions aren't likely to get us very far.)\n> Since EXPLAIN without ANALYZE only *plans* the query, but doesn't run\n> it, it should not take long to do this.\n\nOne issue here is that with the collapse limits cranked up to more than\ngeqo_threshold, he's going to be coping with GEQO's partially-random\nplan selection; so whatever he reports might or might not be especially\nreflective of day-to-day results. I'm tempted to ask that he also push\nup geqo_threshold. It's possible that that *will* send the planning\ntime to the moon; but it would certainly be worth trying, to find out\nwhat plan is produced.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jun 2009 15:39:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query " }, { "msg_contents": "Tom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> With all the optimizer options on, and the from_collapse_limit and\n>> join_collapse_limit values both set to 100, run an EXPLAIN (no\n>> ANALYZE) on your big problem query. Let us know how long the\n>> EXPLAIN runs. If it gets any errors, copy and paste all available\n>> information. (General descriptions aren't likely to get us very\n>> far.) Since EXPLAIN without ANALYZE only *plans* the query, but\n>> doesn't run it, it should not take long to do this.\n> \n> One issue here is that with the collapse limits cranked up to more\n> than geqo_threshold, he's going to be coping with GEQO's partially-\n> random plan selection; so whatever he reports might or might not be\n> especially reflective of day-to-day results. I'm tempted to ask\n> that he also push up geqo_threshold.\n \nIn an earlier post[1] he said that he had geqo turned off. It does\npay to be explicit, though; I'd hate to assume it's of if he's been\nchanging things.\n \nAlberto, please ensure that you still have geqo off when you run the\ntest I suggested. Also, I see that I didn't explicitly say that you\nshould send the ANALYZE output, but that's what would be helpful.\n \n> It's possible that that *will* send the planning time to the moon;\n> but it would certainly be worth trying, to find out what plan is\n> produced.\n \nAgreed. What plan is produced, and how long that takes. (And whether\nhe gets an out of memory error.) 
I figured it was best to get a clear\nanswer to those before moving on....\n \n-Kevin\n \n[1]\nhttp://archives.postgresql.org/pgsql-performance/2009-06/msg00186.php\n", "msg_date": "Thu, 18 Jun 2009 14:54:38 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance with query" } ]
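Putting those requests together, the diagnostic session being asked for is roughly the following psql sketch; the view name is the one Alberto quoted earlier and stands in for whatever the full problem query is:

\timing
SET geqo = off;                  -- or raise geqo_threshold above the collapse limits
SET from_collapse_limit = 100;
SET join_collapse_limit = 100;
EXPLAIN SELECT * FROM v_fsa_2007_estrazione;

With geqo disabled the planner searches the join order exhaustively and deterministically, so both the plan it prints and the time the EXPLAIN itself takes are repeatable rather than partially random GEQO results.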
[ { "msg_contents": "yes, I have to make that because the data on the table need to be\npivoted so it is joined many times with different filter on the column\nthat describe the meaning of the column called numeric_value I'm going\nto show.\nThat could be very ineffective, event because that table contains\nsomething like 25000000 rows...\nThere are two tables in this condition (as you can se in the explain)\nand both are the table with the higher number of rows in the database.\nBut I don's see any other choice to obtain that information.\n\nP.S.: i'm trying with all enable_* to on and pumping to higher values\nfrom_collapse_limit and join_collapse_limit that I've put to 30.\nThe result is that the query, after an hour of work, goes out of memory\n(SQL State 53200)...\n\n", "msg_date": "Wed, 17 Jun 2009 11:33:59 +0200", "msg_from": "Alberto Dalmaso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speeding up a query." }, { "msg_contents": "Alberto Dalmaso <[email protected]> wrote:\n \n> P.S.: i'm trying with all enable_* to on and pumping to higher\n> values from_collapse_limit and join_collapse_limit that I've put to\n> 30.\n \nTom suggested that you set those numbers higher than the number of\ntables joined in the query. I don't think 30 will do that.\n \n> The result is that the query, after an hour of work, goes out of\n> memory (SQL State 53200)...\n \nOuch! Can you provide more details? All information from the\nPostgreSQL log about that event would be good. If there's anything\nwhich might be related in the OS logs from around that time, please\ninclude that, too.\n \nAlso, with those settings at a high value, try running just an EXPLAIN\n(no ANALYZE) of the query, to see how long that takes, and whether you\nhave a memory issue during the planning phase. (You can use \\timing\nin psql to get a report of the run time of the EXPLAIN.)\n \n-Kevin\n", "msg_date": "Wed, 17 Jun 2009 09:54:28 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query." }, { "msg_contents": "Alberto Dalmaso <[email protected]> writes:\n> P.S.: i'm trying with all enable_* to on and pumping to higher values\n> from_collapse_limit and join_collapse_limit that I've put to 30.\n> The result is that the query, after an hour of work, goes out of memory\n> (SQL State 53200)...\n\nHmm, is that happening during planning (ie, do you get the same error\nif you just try to EXPLAIN the query with those settings)? If not,\nplease show the EXPLAIN output.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 11:07:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up a query. " } ]
[ { "msg_contents": "I promised to provide more details of the query (or the function as it is). Here goes.\n\n \n\nScenario: \n\nA chemotherapy regimen requires chair time and nursing time. A patient might sit in the chair for three hours but the nurse only has to be with them for the first hour. Therefore, nurses can manage multiple chairs at a time. Each regimen has a different time requirement.\n\n \n\nTo efficiently manage our chair and nursing resources, we want to schedule against these constraints. Our room currently has 17 chairs and around 8 nurses per day. We administer several hundred different regimens and the time for each regimen varies based on the day of the regimen as well as the course. All of these variables are entered and maintained through a web application I wrote.\n\n \n\nScheduling algorithm:\n\n Written in PostgreSQL (naturally), the algorithm is a single function call. It gathers the data for a day into a temporary table and cycles through each appointment. Appointments are scheduled in the following order: locked appointments (previously scheduled and assigned to a nurse and chair), reserved appointments (a desired time slot has been selected), open appointments (ordered by the required chair time descending and the required nurse time descending). Here's the busy part that loops through each appointment. The table definition follows. Anything beginning with an underscore is a declared variable.\n\n \n\n \n\n-- Reserved and unscheduled appointments.\n\nFOR _APPOINTMENT IN SELECT * FROM MATRIX_UNSCHEDULED WHERE APPT_STATUS <> 'L' ORDER BY ROW_NUM\n\nLOOP\n\n -- Initialize the variables for this record.\n\n RAISE NOTICE 'Status ''%'' - %', _APPOINTMENT.APPT_STATUS, _APPOINTMENT;\n\n _AVAILABLE := null;\n\n select into _UNIT_INTERVALS, _NURSE_INTERVALS, _UNIT_REQUIRED, _NURSE_REQUIRED \n\n _APPOINTMENT.total_unit_time / 5,\n\n _APPOINTMENT.total_nurse_time / 5,\n\n (_APPOINTMENT.total_unit_time || ' minutes')::INTERVAL,\n\n (_APPOINTMENT.total_nurse_time || ' minutes')::INTERVAL;\n\n \n\n \n\n -- Find the first available row for the required unit and nurse time.\n\n select into _AVAILABLE unit.row_num\n\n from (\n\n select m1.row_num\n\n from matrix m1,\n\n matrix m2\n\n where m1.unit_id = m2.unit_id\n\n and m1.nurse_id = m2.nurse_id\n\n and m1.unit_scheduled = false\n\n and m2.unit_scheduled = false\n\n and (_APPOINTMENT.reserved_time is null or m1.timeslot = _APPOINTMENT.reserved_time)\n\n and m2.timeslot between m1.timeslot and (m1.timeslot + _UNIT_REQUIRED)\n\n group by m1.row_num\n\n having count(m2.row_num) = _UNIT_INTERVALS + 1\n\n ) unit,\n\n (\n\n select m1.row_num\n\n from matrix m1,\n\n matrix m2\n\n where m1.unit_id = m2.unit_id\n\n and m1.nurse_id = m2.nurse_id\n\n and m1.nurse_scheduled = false\n\n and m2.nurse_scheduled = false\n\n and (_APPOINTMENT.reserved_time is null or m1.timeslot = _APPOINTMENT.reserved_time)\n\n and m2.timeslot between m1.timeslot and (m1.timeslot + _NURSE_REQUIRED)\n\n group by m1.row_num\n\n having count(m1.row_num) = _NURSE_INTERVALS + 1\n\n ) nurse\n\n where nurse.row_num = unit.row_num\n\n order by unit.row_num\n\n limit 1;\n\n \n\n -- Assign the time, unit, and nurse to the unscheduled appointment.\n\n update matrix_unscheduled set\n\n appt_time = matrix.timeslot,\n\n unit_id = matrix.unit_id,\n\n nurse_id = matrix.nurse_id,\n\n appt_status = 'S'\n\n from matrix\n\n where schedule_appt_id = _APPOINTMENT.schedule_appt_id\n\n and matrix.row_num = _AVAILABLE;\n\n \n\n -- Mark the unit as scheduled for that time.\n\n update 
matrix set\n\n unit_scheduled = true\n\n from (select timeslot, unit_id from matrix where row_num = _AVAILABLE) m2\n\n where matrix.unit_id = m2.unit_id\n\n and matrix.timeslot between m2.timeslot and (m2.timeslot + _UNIT_REQUIRED);\n\n \n\n -- Mark the nurse as scheduled for that time.\n\n update matrix set\n\n nurse_scheduled = true\n\n from (select timeslot, nurse_id from matrix where row_num = _AVAILABLE) m2\n\n where matrix.nurse_id = m2.nurse_id\n\n and matrix.timeslot between m2.timeslot and (m2.timeslot + _NURSE_REQUIRED);\n\n \n\nEND LOOP;\n\n \n\n \n\nCREATE TABLE matrix_unscheduled\n\n(\n\n row_num serial NOT NULL,\n\n schedule_appt_id integer NOT NULL,\n\n appt_time timestamp without time zone,\n\n reserved_time timestamp without time zone,\n\n appt_status character(1) NOT NULL,\n\n unit_id integer,\n\n nurse_id integer,\n\n total_unit_time integer NOT NULL,\n\n total_nurse_time integer NOT NULL,\n\n CONSTRAINT pk_matrix_unscheduled PRIMARY KEY (row_num)\n\n)\n\nWITH (OIDS=FALSE);\n\n \n\nCREATE TABLE matrix\n\n(\n\n row_num serial NOT NULL,\n\n timeslot timestamp without time zone NOT NULL,\n\n unit_id integer NOT NULL,\n\n nurse_id integer NOT NULL,\n\n unit_scheduled boolean NOT NULL,\n\n nurse_scheduled boolean NOT NULL,\n\n CONSTRAINT pk_matrix PRIMARY KEY (row_num)\n\n)\n\nWITH (OIDS=FALSE);\n\n \n\nThere are indexes on \"matrix\" for \"timeslot,unit_id\", \"timeslot,nurse_id\", and \"unit_id,nurse_id\".\n\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n(613) 549-6666 x4294 \n\n \n\n \n", "msg_date": "Wed, 17 Jun 2009 10:58:52 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speeding up a query." } ]
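Because every iteration of the loop rescans matrix for rows with unit_scheduled = false and nurse_scheduled = false, one avenue worth testing is a pair of partial indexes that cover only the still-free slots. This is a sketch against the matrix table above, not part of the original schema, and any gain would have to be verified with EXPLAIN ANALYZE of the inner availability query:

CREATE INDEX matrix_unit_free_idx
    ON matrix (unit_id, timeslot)
    WHERE unit_scheduled = false;

CREATE INDEX matrix_nurse_free_idx
    ON matrix (nurse_id, timeslot)
    WHERE nurse_scheduled = false;

The idea is to keep each per-appointment scan focused on the free portion of the day's matrix instead of the full timeslot grid.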
[ { "msg_contents": "Hi, sorry about the blank post yesterday - let's try again\n\n \n\nWe have two machines. Both running Linux Redhat, both running postgres\n8.2.5.\n\nBoth have nearly identical 125 GB databases. In fact we use PITR Recovery\nto \n\nReplicate from one to the other. The machine we replicate to runs a query\nwith\n\nAbout 10 inner and left joins about 5 times slower than the original machine\n\nI run an explain on both. Machine1 (original) planner favors hash joins\nabout 3 to 1\n\nOver nested loop joins. Machine2 (replicated) uses only nested loop joins -\nno hash at all.\n\n \n\nA few details - I can always provide more\n\n \n\n MACHINE1 - original:\n\n TOTAL RAW MEMORY - 30 GB\n\n TOTAL SHARED MEMORY (shmmax value) - 4 GB\n\n \n\n Database configs\n\n SHARED_BUFFERS ------------- 1525 MB\n\n MAX_PREPARED_TRANSACTIONS - 5\n\n WORK_MEM - -------------------- 300 MB\n\n MAINTENANCE_WORK_MEM - 512 MB \n\n MAX_FSM_PAGES -------------- 3,000,000\n\n CHECKPOINT_SEGMENTS ----- 64\n\n WAL_BUFFERS --------------------- 768\n\n EFFECTIVE_CACHE_SIZE ---- 2 GB\n\n Planner method configs all turned on by default, including\nenable_hashjoin\n\n \n\n MACHINE2 - we run 2 postgres instances. Port 5433 runs continuous PITR\nrecoveries\n\n Port 5432 receives the 'latest and greatest' database when port 5433\nfinishes a recovery\n\n TOTAL RAW MEMORY - 16 GB (this is a VMWARE setup on a netapp)\n\n TOTAL SHARED MEMORY (shmmax value) - 4 GB\n\n \n\n Database configs - port 5432 instance\n\n SHARED_BUFFERS ------------ 1500 MB\n\n MAX_PREPARED_TRANSACTIONS - 1 (we don't run prepared transactions\nhere)\n\n WORK_MEM - -------------------- 300 MB\n\n MAINTENANCE_WORK_MEM - 100 MB (don't think this comes into play\nin this conversation)\n\n MAX_FSM_PAGES -------------- 1,000,000\n\n CHECKPOINT_SEGMENTS ----- 32\n\n WAL_BUFFERS --------------------- 768\n\n EFFECTIVE_CACHE_SIZE ---- 2 GB\n\n Planner method configs all turned on by default, including\nenable_hashjoin\n\n \n\n Database configs - port 5433 instance\n\n SHARED_BUFFERS ------------ 1500 MB\n\n MAX_PREPARED_TRANSACTIONS - 1 (we don't run prepared transactions\nhere)\n\n WORK_MEM - -------------------- 250 MB\n\n MAINTENANCE_WORK_MEM - 100 MB (don't think this comes into play\nin this conversation)\n\n MAX_FSM_PAGES -------------- 1,000,000\n\n CHECKPOINT_SEGMENTS ----- 32\n\n WAL_BUFFERS --------------------- 768\n\n EFFECTIVE_CACHE_SIZE ---- 2 GB\n\n Planner method configs all turned on by default, including\nenable_hashjoin\n\n \n\n Now some size details about the 11 tables involved in the join\n\n All join fields are indexed unless otherwise noted and are of type\ninteger unless otherwise noted\n\n \n\n TABLE1 -------------398 pages\n\n TABLE2 -------- 5,014 pages INNER JOIN on TABLE1\n\n TABLE3 ------- 34,729 pages INNER JOIN on TABLE2 \n\n TABLE4 ----1,828,000 pages INNER JOIN on TABLE2\n\n TABLE5 ----1,838,000 pages INNER JOIN on TABLE4\n\n TABLE6 ------ 122,500 pages INNER JOIN on TABLE4 \n\n TABLE7 ----------- 621 pages INNER JOIN on TABLE6\n\n TABLE8 ---------- 4 pages INNER JOIN on TABLE7 (TABLE7 column\nnot indexed)\n\n TABLE9 ----------- 2 pages INNER JOIN on TABLE8 (TABLE8 column\nnot indexed)\n\n TABLE10 --------- 13 pages LEFT JOIN on TABLE6 (columns on both\ntables text, neither column indexed)\n\n TABLE11 -1,976,430 pages LEFT JOIN on TABLE5. 
AND explicit join on\nTABLE6\n\n The WHERE clause filters out primary key values from TABLE1 to 1\nvalue and a 1 month range of \n\n Indexed dates from TABLE4.\n\n \n\n So, my guess is the disparity of performance (40 seconds vs 180 seconds)\nhas to do with MACHINE2 not\n\n Availing itself of hash joins which by my understanding is much faster.\n\n \n\nAny help / insight appreciated. Thank you\n\n \n\n \n\n \n\n \n\n \n\n \n\nMark Steben│Database Administrator│ \n\n@utoRevenue-R- \"Join the Revenue-tion\"\n95 Ashley Ave. West Springfield, MA., 01089 \n413-243-4800 x1512 (Phone) │ 413-732-1824 (Fax)\n\n@utoRevenue is a registered trademark and a division of Dominion Enterprises\n\n \n", "msg_date": "Wed, 17 Jun 2009 13:12:51 -0400", "msg_from": "\"Mark Steben\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue - 2 linux machines, identical configs,\n\tdifferent performance" }, { "msg_contents": "2009/6/17 Mark Steben <[email protected]>:\n> A few details – I can always provide more\n\nCould you send:\n\n1. Exact text of query.\n\n2. 
EXPLAIN ANALYZE output on each machine.\n\n3. VACUUM VERBOSE output on each machine, or at least the last 10 lines.\n\n...Robert\n", "msg_date": "Wed, 17 Jun 2009 13:31:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue - 2 linux machines, identical\n\tconfigs, different performance" }, { "msg_contents": "\n>We have two machines. Both running Linux Redhat, both running postgres\n8.2.5.\n>Both have nearly identical 125 GB databases. In fact we use PITR Recovery\nto \n>Replicate from one to the other. \n\nI have to ask the obvious question. Do you regularly analyze the machine\nyou replicate too?\n\n\nDave\n\n\n", "msg_date": "Wed, 17 Jun 2009 12:39:22 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue - 2 linux machines, identical configs,\n\tdifferent performance" }, { "msg_contents": "Yes I analyze after each replication.\n\nMark Steben│Database Administrator│ \n\n@utoRevenue-R- \"Join the Revenue-tion\"\n95 Ashley Ave. West Springfield, MA., 01089 \n413-243-4800 x1512 (Phone) │ 413-732-1824 (Fax)\n\n@utoRevenue is a registered trademark and a division of Dominion Enterprises\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Dave Dutcher\nSent: Wednesday, June 17, 2009 1:39 PM\nTo: 'Mark Steben'; [email protected]\nCc: 'Rich Garabedian'\nSubject: Re: [PERFORM] Performance issue - 2 linux machines, identical\nconfigs, different performance\n\n\n>We have two machines. Both running Linux Redhat, both running postgres\n8.2.5.\n>Both have nearly identical 125 GB databases. In fact we use PITR Recovery\nto \n>Replicate from one to the other. \n\nI have to ask the obvious question. Do you regularly analyze the machine\nyou replicate too?\n\n\nDave\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n", "msg_date": "Wed, 17 Jun 2009 14:05:10 -0400", "msg_from": "\"Mark Steben\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue - 2 linux machines, identical configs,\n\tdifferent performance" } ]
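Beyond the EXPLAIN ANALYZE and VACUUM VERBOSE output requested above, a quick way to see whether the two machines are planning from different statistics is to compare what the planner thinks it knows about the big tables in the join. A sketch, with lower-case placeholders standing in for the real names of TABLE4, TABLE5 and TABLE11:

SELECT relname, relpages, reltuples
FROM   pg_class
WHERE  relname IN ('table4', 'table5', 'table11');

SELECT tablename, attname, n_distinct, correlation
FROM   pg_stats
WHERE  tablename IN ('table4', 'table5', 'table11');

If reltuples or n_distinct differ noticeably between the original and the replicated box, the nested-loop-only plans on MACHINE2 may simply reflect different row-count estimates rather than anything about hash joins themselves.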
[ { "msg_contents": "There are 4 threads (4 postgres processes) loading all rows from a table \nwith 50,018 rows. The table has a int8 PK that is incremented by 1 for \neach new row and the PK is used by the threads to partition the rows so \nthat each loads distinct rows. As you can see below, these 4 SELECTs \nhave been running since 6:30am (it's now 11:30am) -- sluggish at best -- \nand each of the postgres processes is using 100% CPU. The table schema \nis long (~160 cols), so I'm omitting it but will provide it if deemed \nnecessary. Any ideas about the cause of this are appreciated.\n\nThanks,\nBrian\n\ncemdb=# select procpid, xact_start, current_query from pg_stat_activity; \n procpid | xact_start | \n current_query\n---------+-------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------------------\n 27825 | | <IDLE>\n 27826 | | <IDLE>\n 27824 | | <IDLE>\n 27828 | | <IDLE>\n 27827 | | <IDLE>\n 27829 | | <IDLE>\n 27830 | | <IDLE>\n 27831 | | <IDLE>\n 27832 | | <IDLE>\n 27833 | | <IDLE>\n 14031 | 2009-06-17 05:48:02.931503-07 | autovacuum: VACUUM ANALYZE \npublic.ts_stats_transet_user_weekly\n 16044 | | <IDLE>\n 32169 | 2009-06-17 08:17:39.034142-07 | autovacuum: VACUUM ANALYZE \npublic.ts_stats_transetgroup_user_weekly\n 7165 | | <IDLE>\n 16043 | | <IDLE>\n 22130 | 2009-06-17 11:22:05.339582-07 | select procpid, xact_start, \ncurrent_query from pg_stat_activity;\n 7169 | 2009-06-17 06:31:26.997641-07 | select a.ts_id as id1_0_, \na.version_info as versionI2_1_0_, a.ts_user_id as userId1_0_, null as \ntranSetId1_0_, null as tranUnitId1_0_, null as userGro10_1_0_, \na.ts_transet_group_id as tranSetG7_1_0_, a.ts_last_aggregated_row as \nlastAgg12_1_0_, null as tranSetI5_1_0_, a.ts_user_incarnation_id as \nuserInca9_1_0_, a.ts_interval_start_time as interva11_1_0_, \na.ts_interval_wire_time as interva13_1_0_, a.ts_total_transactions as \ntotalTr14_1_0_, a.ts_bad_transactions as badTran15_1_0_, \na.ts_opportunities as opportu16_1_0_, a.ts_defects as defects1_0_, \na.ts_data_type as dataType1_0_, a.ts_tth_type as tthType1_0_, \na.ts_tth_lower_spec as tthLowe20_1_0_, a.ts_tth_upper_spec as \ntthUppe21_1_0_, a.ts_tth_b0 as tthB22_1_0_, a.ts_tth_b1 as tthB23_1_0_, \na.ts_tth_b2 as tthB24_1_0_, a.ts_tth_b3 as tthB25_1_0_, a.ts_tth_b4 as \ntthB26_1_0_, a.ts_tth_b5 as tthB27_1_0_, a.ts_tth_b6 as tthB28_1_0_, \na.ts_tth_b7 as tthB29_1_0_, a.ts_tth_b8 as tthB30_1_0_, a.ts_tth_b9 as \ntthB31_1_0_, a.ts_tth_b10 as tthB32_1_0_, a.ts_tth_b11 as tthB33_1\n 7171 | | <IDLE>\n 7172 | | <IDLE>\n 28106 | | <IDLE>\n 7392 | 2009-06-17 
06:31:26.997985-07 | select a.ts_id as id1_0_, \na.version_info as versionI2_1_0_, a.ts_user_id as userId1_0_, null as \ntranSetId1_0_, null as tranUnitId1_0_, null as userGro10_1_0_, \na.ts_transet_group_id as tranSetG7_1_0_, a.ts_last_aggregated_row as \nlastAgg12_1_0_, null as tranSetI5_1_0_, a.ts_user_incarnation_id as \nuserInca9_1_0_, a.ts_interval_start_time as interva11_1_0_, \na.ts_interval_wire_time as interva13_1_0_, a.ts_total_transactions as \ntotalTr14_1_0_, a.ts_bad_transactions as badTran15_1_0_, \na.ts_opportunities as opportu16_1_0_, a.ts_defects as defects1_0_, \na.ts_data_type as dataType1_0_, a.ts_tth_type as tthType1_0_, \na.ts_tth_lower_spec as tthLowe20_1_0_, a.ts_tth_upper_spec as \ntthUppe21_1_0_, a.ts_tth_b0 as tthB22_1_0_, a.ts_tth_b1 as tthB23_1_0_, \na.ts_tth_b2 as tthB24_1_0_, a.ts_tth_b3 as tthB25_1_0_, a.ts_tth_b4 as \ntthB26_1_0_, a.ts_tth_b5 as tthB27_1_0_, a.ts_tth_b6 as tthB28_1_0_, \na.ts_tth_b7 as tthB29_1_0_, a.ts_tth_b8 as tthB30_1_0_, a.ts_tth_b9 as \ntthB31_1_0_, a.ts_tth_b10 as tthB32_1_0_, a.ts_tth_b11 as tthB33_1\n 7396 | | <IDLE>\n 7397 | 2009-06-17 06:31:26.998013-07 | select a.ts_id as id1_0_, \na.version_info as versionI2_1_0_, a.ts_user_id as userId1_0_, null as \ntranSetId1_0_, null as tranUnitId1_0_, null as userGro10_1_0_, \na.ts_transet_group_id as tranSetG7_1_0_, a.ts_last_aggregated_row as \nlastAgg12_1_0_, null as tranSetI5_1_0_, a.ts_user_incarnation_id as \nuserInca9_1_0_, a.ts_interval_start_time as interva11_1_0_, \na.ts_interval_wire_time as interva13_1_0_, a.ts_total_transactions as \ntotalTr14_1_0_, a.ts_bad_transactions as badTran15_1_0_, \na.ts_opportunities as opportu16_1_0_, a.ts_defects as defects1_0_, \na.ts_data_type as dataType1_0_, a.ts_tth_type as tthType1_0_, \na.ts_tth_lower_spec as tthLowe20_1_0_, a.ts_tth_upper_spec as \ntthUppe21_1_0_, a.ts_tth_b0 as tthB22_1_0_, a.ts_tth_b1 as tthB23_1_0_, \na.ts_tth_b2 as tthB24_1_0_, a.ts_tth_b3 as tthB25_1_0_, a.ts_tth_b4 as \ntthB26_1_0_, a.ts_tth_b5 as tthB27_1_0_, a.ts_tth_b6 as tthB28_1_0_, \na.ts_tth_b7 as tthB29_1_0_, a.ts_tth_b8 as tthB30_1_0_, a.ts_tth_b9 as \ntthB31_1_0_, a.ts_tth_b10 as tthB32_1_0_, a.ts_tth_b11 as tthB33_1\n 7403 | 2009-06-17 06:31:26.998273-07 | select a.ts_id as id1_0_, \na.version_info as versionI2_1_0_, a.ts_user_id as userId1_0_, null as \ntranSetId1_0_, null as tranUnitId1_0_, null as userGro10_1_0_, \na.ts_transet_group_id as tranSetG7_1_0_, a.ts_last_aggregated_row as \nlastAgg12_1_0_, null as tranSetI5_1_0_, a.ts_user_incarnation_id as \nuserInca9_1_0_, a.ts_interval_start_time as interva11_1_0_, \na.ts_interval_wire_time as interva13_1_0_, a.ts_total_transactions as \ntotalTr14_1_0_, a.ts_bad_transactions as badTran15_1_0_, \na.ts_opportunities as opportu16_1_0_, a.ts_defects as defects1_0_, \na.ts_data_type as dataType1_0_, a.ts_tth_type as tthType1_0_, \na.ts_tth_lower_spec as tthLowe20_1_0_, a.ts_tth_upper_spec as \ntthUppe21_1_0_, a.ts_tth_b0 as tthB22_1_0_, a.ts_tth_b1 as tthB23_1_0_, \na.ts_tth_b2 as tthB24_1_0_, a.ts_tth_b3 as tthB25_1_0_, a.ts_tth_b4 as \ntthB26_1_0_, a.ts_tth_b5 as tthB27_1_0_, a.ts_tth_b6 as tthB28_1_0_, \na.ts_tth_b7 as tthB29_1_0_, a.ts_tth_b8 as tthB30_1_0_, a.ts_tth_b9 as \ntthB31_1_0_, a.ts_tth_b10 as tthB32_1_0_, a.ts_tth_b11 as tthB33_1\n 32571 | 2009-06-16 19:03:16.645352-07 | autovacuum: VACUUM ANALYZE \npublic.ts_stats_transet_user_daily\n(25 rows)\n\ncemdb=# select c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c \njoin pg_locks l on c.oid=l.relation order by l.pid;\n oid | relname | \n pid | 
mode | granted\n----------+----------------------------------------------------------+-------+--------------------------+---------\n 26612634 | ts_stats_transetgroup_user_daily_monthindex | \n 7169 | AccessShareLock | t\n 26612631 | ts_stats_transetgroup_user_daily_dayindex | \n 7169 | AccessShareLock | t\n 26612532 | ts_stats_transet_user_interval_monthindex | \n 7169 | AccessShareLock | t\n 26612322 | ts_stats_transet_user_interval_pkey | \n 7169 | AccessShareLock | t\n 26611729 | ts_stats_transet_user_interval | \n 7169 | AccessShareLock | t\n 26612639 | ts_stats_transetgroup_user_daily_yearindex | \n 7169 | AccessShareLock | t\n 26612638 | ts_stats_transetgroup_user_daily_weekindex | \n 7169 | AccessShareLock | t\n 26612635 | ts_stats_transetgroup_user_daily_starttimeindex | \n 7169 | AccessShareLock | t\n 26612538 | ts_stats_transet_user_interval_yearindex | \n 7169 | AccessShareLock | t\n 26612530 | ts_stats_transet_user_interval_dayindex | \n 7169 | AccessShareLock | t\n 26612531 | ts_stats_transet_user_interval_hourindex | \n 7169 | AccessShareLock | t\n 26612633 | ts_stats_transetgroup_user_daily_lastaggregatedrowindex | \n 7169 | AccessShareLock | t\n 26611806 | ts_stats_transetgroup_user_daily | \n 7169 | AccessShareLock | t\n 26612835 | ts_transetgroup_transets_map_transetgroupidindex | \n 7169 | AccessShareLock | t\n 26612534 | ts_stats_transet_user_interval_transetindex | \n 7169 | AccessShareLock | t\n 26612637 | ts_stats_transetgroup_user_daily_userindex | \n 7169 | AccessShareLock | t\n 26611350 | ts_transetgroup_transets_map | \n 7169 | AccessShareLock | t\n 26612636 | ts_stats_transetgroup_user_daily_transetgroupindex | \n 7169 | AccessShareLock | t\n 26612836 | ts_transetgroup_transets_map_transetidindex | \n 7169 | AccessShareLock | t\n 26612632 | ts_stats_transetgroup_user_daily_hourindex | \n 7169 | AccessShareLock | t\n 26612401 | ts_transetgroup_transets_map_pkey | \n 7169 | AccessShareLock | t\n 26612351 | ts_stats_transetgroup_user_daily_pkey | \n 7169 | AccessShareLock | t\n 26612536 | ts_stats_transet_user_interval_userindex | \n 7169 | AccessShareLock | t\n 26612537 | ts_stats_transet_user_interval_weekindex | \n 7169 | AccessShareLock | t\n 26612837 | ts_transetgroup_transets_map_transetincarnationindex | \n 7169 | AccessShareLock | t\n 26612533 | ts_stats_transet_user_interval_starttime | \n 7169 | AccessShareLock | t\n 26612535 | ts_stats_transet_user_interval_userincarnationidindex | \n 7169 | AccessShareLock | t\n 26612537 | ts_stats_transet_user_interval_weekindex | \n 7392 | AccessShareLock | t\n 26612634 | ts_stats_transetgroup_user_daily_monthindex | \n 7392 | AccessShareLock | t\n 26612632 | ts_stats_transetgroup_user_daily_hourindex | \n 7392 | AccessShareLock | t\n 26612530 | ts_stats_transet_user_interval_dayindex | \n 7392 | AccessShareLock | t\n 26612633 | ts_stats_transetgroup_user_daily_lastaggregatedrowindex | \n 7392 | AccessShareLock | t\n 26612533 | ts_stats_transet_user_interval_starttime | \n 7392 | AccessShareLock | t\n 26612637 | ts_stats_transetgroup_user_daily_userindex | \n 7392 | AccessShareLock | t\n 26612351 | ts_stats_transetgroup_user_daily_pkey | \n 7392 | AccessShareLock | t\n 26612534 | ts_stats_transet_user_interval_transetindex | \n 7392 | AccessShareLock | t\n 26612631 | ts_stats_transetgroup_user_daily_dayindex | \n 7392 | AccessShareLock | t\n 26612535 | ts_stats_transet_user_interval_userincarnationidindex | \n 7392 | AccessShareLock | t\n 26612636 | ts_stats_transetgroup_user_daily_transetgroupindex | \n 7392 | 
AccessShareLock | t\n 26612639 | ts_stats_transetgroup_user_daily_yearindex | \n 7392 | AccessShareLock | t\n 26611729 | ts_stats_transet_user_interval | \n 7392 | AccessShareLock | t\n 26612322 | ts_stats_transet_user_interval_pkey | \n 7392 | AccessShareLock | t\n 26612538 | ts_stats_transet_user_interval_yearindex | \n 7392 | AccessShareLock | t\n 26612837 | ts_transetgroup_transets_map_transetincarnationindex | \n 7392 | AccessShareLock | t\n 26612635 | ts_stats_transetgroup_user_daily_starttimeindex | \n 7392 | AccessShareLock | t\n 26612532 | ts_stats_transet_user_interval_monthindex | \n 7392 | AccessShareLock | t\n 26612835 | ts_transetgroup_transets_map_transetgroupidindex | \n 7392 | AccessShareLock | t\n 26612638 | ts_stats_transetgroup_user_daily_weekindex | \n 7392 | AccessShareLock | t\n 26612401 | ts_transetgroup_transets_map_pkey | \n 7392 | AccessShareLock | t\n 26612536 | ts_stats_transet_user_interval_userindex | \n 7392 | AccessShareLock | t\n 26612836 | ts_transetgroup_transets_map_transetidindex | \n 7392 | AccessShareLock | t\n 26611806 | ts_stats_transetgroup_user_daily | \n 7392 | AccessShareLock | t\n 26612531 | ts_stats_transet_user_interval_hourindex | \n 7392 | AccessShareLock | t\n 26611350 | ts_transetgroup_transets_map | \n 7392 | AccessShareLock | t\n 26612538 | ts_stats_transet_user_interval_yearindex | \n 7397 | AccessShareLock | t\n 26612537 | ts_stats_transet_user_interval_weekindex | \n 7397 | AccessShareLock | t\n 26612533 | ts_stats_transet_user_interval_starttime | \n 7397 | AccessShareLock | t\n 26612639 | ts_stats_transetgroup_user_daily_yearindex | \n 7397 | AccessShareLock | t\n 26612632 | ts_stats_transetgroup_user_daily_hourindex | \n 7397 | AccessShareLock | t\n 26612531 | ts_stats_transet_user_interval_hourindex | \n 7397 | AccessShareLock | t\n 26612835 | ts_transetgroup_transets_map_transetgroupidindex | \n 7397 | AccessShareLock | t\n 26611350 | ts_transetgroup_transets_map | \n 7397 | AccessShareLock | t\n 26612532 | ts_stats_transet_user_interval_monthindex | \n 7397 | AccessShareLock | t\n 26612836 | ts_transetgroup_transets_map_transetidindex | \n 7397 | AccessShareLock | t\n 26612322 | ts_stats_transet_user_interval_pkey | \n 7397 | AccessShareLock | t\n 26612535 | ts_stats_transet_user_interval_userincarnationidindex | \n 7397 | AccessShareLock | t\n 26612637 | ts_stats_transetgroup_user_daily_userindex | \n 7397 | AccessShareLock | t\n 26612631 | ts_stats_transetgroup_user_daily_dayindex | \n 7397 | AccessShareLock | t\n 26612634 | ts_stats_transetgroup_user_daily_monthindex | \n 7397 | AccessShareLock | t\n 26611729 | ts_stats_transet_user_interval | \n 7397 | AccessShareLock | t\n 26611806 | ts_stats_transetgroup_user_daily | \n 7397 | AccessShareLock | t\n 26612636 | ts_stats_transetgroup_user_daily_transetgroupindex | \n 7397 | AccessShareLock | t\n 26612351 | ts_stats_transetgroup_user_daily_pkey | \n 7397 | AccessShareLock | t\n 26612536 | ts_stats_transet_user_interval_userindex | \n 7397 | AccessShareLock | t\n 26612638 | ts_stats_transetgroup_user_daily_weekindex | \n 7397 | AccessShareLock | t\n 26612530 | ts_stats_transet_user_interval_dayindex | \n 7397 | AccessShareLock | t\n 26612837 | ts_transetgroup_transets_map_transetincarnationindex | \n 7397 | AccessShareLock | t\n 26612633 | ts_stats_transetgroup_user_daily_lastaggregatedrowindex | \n 7397 | AccessShareLock | t\n 26612401 | ts_transetgroup_transets_map_pkey | \n 7397 | AccessShareLock | t\n 26612534 | ts_stats_transet_user_interval_transetindex | \n 7397 | 
AccessShareLock | t\n 26612635 | ts_stats_transetgroup_user_daily_starttimeindex | \n 7397 | AccessShareLock | t\n 26612639 | ts_stats_transetgroup_user_daily_yearindex | \n 7403 | AccessShareLock | t\n 26612351 | ts_stats_transetgroup_user_daily_pkey | \n 7403 | AccessShareLock | t\n 26612536 | ts_stats_transet_user_interval_userindex | \n 7403 | AccessShareLock | t\n 26612634 | ts_stats_transetgroup_user_daily_monthindex | \n 7403 | AccessShareLock | t\n 26612635 | ts_stats_transetgroup_user_daily_starttimeindex | \n 7403 | AccessShareLock | t\n 26612538 | ts_stats_transet_user_interval_yearindex | \n 7403 | AccessShareLock | t\n 26612633 | ts_stats_transetgroup_user_daily_lastaggregatedrowindex | \n 7403 | AccessShareLock | t\n 26612530 | ts_stats_transet_user_interval_dayindex | \n 7403 | AccessShareLock | t\n 26612837 | ts_transetgroup_transets_map_transetincarnationindex | \n 7403 | AccessShareLock | t\n 26612532 | ts_stats_transet_user_interval_monthindex | \n 7403 | AccessShareLock | t\n 26612322 | ts_stats_transet_user_interval_pkey | \n 7403 | AccessShareLock | t\n 26611729 | ts_stats_transet_user_interval | \n 7403 | AccessShareLock | t\n 26612535 | ts_stats_transet_user_interval_userincarnationidindex | \n 7403 | AccessShareLock | t\n 26612637 | ts_stats_transetgroup_user_daily_userindex | \n 7403 | AccessShareLock | t\n 26612836 | ts_transetgroup_transets_map_transetidindex | \n 7403 | AccessShareLock | t\n 26611350 | ts_transetgroup_transets_map | \n 7403 | AccessShareLock | t\n 26612401 | ts_transetgroup_transets_map_pkey | \n 7403 | AccessShareLock | t\n 26612631 | ts_stats_transetgroup_user_daily_dayindex | \n 7403 | AccessShareLock | t\n 26612636 | ts_stats_transetgroup_user_daily_transetgroupindex | \n 7403 | AccessShareLock | t\n 26611806 | ts_stats_transetgroup_user_daily | \n 7403 | AccessShareLock | t\n 26612534 | ts_stats_transet_user_interval_transetindex | \n 7403 | AccessShareLock | t\n 26612835 | ts_transetgroup_transets_map_transetgroupidindex | \n 7403 | AccessShareLock | t\n 26612632 | ts_stats_transetgroup_user_daily_hourindex | \n 7403 | AccessShareLock | t\n 26612531 | ts_stats_transet_user_interval_hourindex | \n 7403 | AccessShareLock | t\n 26612537 | ts_stats_transet_user_interval_weekindex | \n 7403 | AccessShareLock | t\n 26612533 | ts_stats_transet_user_interval_starttime | \n 7403 | AccessShareLock | t\n 26612638 | ts_stats_transetgroup_user_daily_weekindex | \n 7403 | AccessShareLock | t\n 26611743 | ts_stats_transet_user_weekly | \n14031 | ShareUpdateExclusiveLock | t\n 26612553 | ts_stats_transet_user_weekly_starttimeindex | \n14031 | RowExclusiveLock | t\n 26612326 | ts_stats_transet_user_weekly_pkey | \n14031 | RowExclusiveLock | t\n 26612551 | ts_stats_transet_user_weekly_lastaggregatedrowindex | \n14031 | RowExclusiveLock | t\n 26612550 | ts_stats_transet_user_weekly_hourindex | \n14031 | RowExclusiveLock | t\n 26612552 | ts_stats_transet_user_weekly_monthindex | \n14031 | RowExclusiveLock | t\n 26612549 | ts_stats_transet_user_weekly_dayindex | \n14031 | RowExclusiveLock | t\n 26612558 | ts_stats_transet_user_weekly_yearindex | \n14031 | RowExclusiveLock | t\n 26612554 | ts_stats_transet_user_weekly_transetindex | \n14031 | RowExclusiveLock | t\n 26612556 | ts_stats_transet_user_weekly_userindex | \n14031 | RowExclusiveLock | t\n 26612557 | ts_stats_transet_user_weekly_weekindex | \n14031 | RowExclusiveLock | t\n 26612555 | ts_stats_transet_user_weekly_userincarnationidindex | \n14031 | RowExclusiveLock | t\n 10969 | pg_locks | \n22130 | 
AccessShareLock | t\n 1259 | pg_class | \n22130 | AccessShareLock | t\n 2662 | pg_class_oid_index | \n22130 | AccessShareLock | t\n 2663 | pg_class_relname_nsp_index | \n22130 | AccessShareLock | t\n 26612659 | ts_stats_transetgroup_user_weekly_hourindex | \n32169 | AccessShareLock | t\n 26612658 | ts_stats_transetgroup_user_weekly_dayindex | \n32169 | AccessShareLock | t\n 26612666 | ts_stats_transetgroup_user_weekly_yearindex | \n32169 | AccessShareLock | t\n 26612665 | ts_stats_transetgroup_user_weekly_weekindex | \n32169 | AccessShareLock | t\n 26612661 | ts_stats_transetgroup_user_weekly_monthindex | \n32169 | AccessShareLock | t\n 26612663 | ts_stats_transetgroup_user_weekly_transetgroupindex | \n32169 | AccessShareLock | t\n 26612662 | ts_stats_transetgroup_user_weekly_starttimeindex | \n32169 | AccessShareLock | t\n 26611827 | ts_stats_transetgroup_user_weekly | \n32169 | ShareUpdateExclusiveLock | t\n 26612660 | ts_stats_transetgroup_user_weekly_lastaggregatedrowindex | \n32169 | AccessShareLock | t\n 26612664 | ts_stats_transetgroup_user_weekly_userindex | \n32169 | AccessShareLock | t\n 26612357 | ts_stats_transetgroup_user_weekly_pkey | \n32169 | AccessShareLock | t\n 27208304 | ts_stats_transet_user_daily_starttimeindex | \n32571 | RowExclusiveLock | t\n 26612320 | ts_stats_transet_user_daily_pkey | \n32571 | RowExclusiveLock | t\n 27208300 | ts_stats_transet_user_daily_dayindex | \n32571 | RowExclusiveLock | t\n 27208305 | ts_stats_transet_user_daily_transetincarnationidindex | \n32571 | RowExclusiveLock | t\n 27208302 | ts_stats_transet_user_daily_lastaggregatedrowindex | \n32571 | RowExclusiveLock | t\n 27208310 | ts_stats_transet_user_daily_yearindex | \n32571 | RowExclusiveLock | t\n 27208309 | ts_stats_transet_user_daily_weekindex | \n32571 | RowExclusiveLock | t\n 27208307 | ts_stats_transet_user_daily_userincarnationidindex | \n32571 | RowExclusiveLock | t\n 26611722 | ts_stats_transet_user_daily | \n32571 | ShareUpdateExclusiveLock | t\n 27208301 | ts_stats_transet_user_daily_hourindex | \n32571 | RowExclusiveLock | t\n 27208306 | ts_stats_transet_user_daily_transetindex | \n32571 | RowExclusiveLock | t\n 27208303 | ts_stats_transet_user_daily_monthindex | \n32571 | RowExclusiveLock | t\n 27208308 | ts_stats_transet_user_daily_userindex | \n32571 | RowExclusiveLock | t\n(148 rows)\n\n[root@rdl64xeoserv01 log]# strace -p 7397\nProcess 7397 attached - interrupt to quit\nmunmap(0x95393000, 1052672) = 0\nmunmap(0x95494000, 528384) = 0\nmunmap(0x95515000, 266240) = 0\nbrk(0x8603000) = 0x8603000\nbrk(0x85fb000) = 0x85fb000\n_llseek(144, 0, [292618240], SEEK_END) = 0\nbrk(0x85eb000) = 0x85eb000\n_llseek(65, 897179648, [897179648], SEEK_SET) = 0\nread(65, \"\\276\\2\\0\\0\\320\\337\\275\\315\\1\\0\\0\\0|\\3`\\22\\360\\37\\4 \\0\\0\"..., \n8192) = 8192\n_llseek(65, 471457792, [471457792], SEEK_SET) = 0\nread(65, \"\\276\\2\\0\\0\\320\\337\\275\\315\\1\\0\\0\\0t\\6\\200\\6\\360\\37\\4 \\0\"..., \n8192) = 8192\nread(65, \"\\276\\2\\0\\0\\354\\271\\355\\312\\1\\0\\0\\0\\374\\5`\\10\\360\\37\\4 \"..., \n8192) = 8192\nread(65, \"\\0\\0\\0\\0\\0\\0\\0\\0\\1\\0\\0\\0\\324\\5\\0\\t\\360\\37\\4 \\0\\0\\0\\0\\0\"..., \n8192) = 8192\nbrk(0x8613000) = 0x8613000\nmmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95515000\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95494000\nmmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \n-1, 0) = 0x95393000\nmunmap(0x95393000, 1052672) = 0\nmunmap(0x95494000, 
528384) = 0\nmunmap(0x95515000, 266240) = 0\nbrk(0x8603000) = 0x8603000\nbrk(0x85fb000) = 0x85fb000\n_llseek(144, 0, [292618240], SEEK_END) = 0\nbrk(0x85eb000) = 0x85eb000\n_llseek(65, 86941696, [86941696], SEEK_SET) = 0\nread(65, \"\\304\\2\\0\\0\\30\\333\\v\\244\\1\\0\\0\\0\\324\\5\\0\\t\\360\\37\\4 \\0\\0\"..., \n8192) = 8192\n_llseek(65, 892682240, [892682240], SEEK_SET) = 0\nread(65, \"\\276\\2\\0\\0\\fq\\231\\310\\1\\0\\0\\0|\\3`\\22\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\n_llseek(65, 86949888, [86949888], SEEK_SET) = 0\nread(65, \"\\276\\2\\0\\0\\fq\\231\\310\\1\\0\\0\\0t\\6\\200\\6\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(65, \"\\276\\2\\0\\0\\270*\\341\\305\\1\\0\\0\\0\\364\\5\\200\\10\\360\\37\\4 \"..., \n8192) = 8192\nread(65, \"\\0\\0\\0\\0\\0\\0\\0\\0\\1\\0\\0\\0\\324\\5\\0\\t\\360\\37\\4 \\0\\0\\0\\0\\0\"..., \n8192) = 8192\nread(65, \"\\304\\2\\0\\0\\320`\\225\\242\\1\\0\\0\\0000\\4\\220\\17\\360\\37\\4 \\0\"..., \n8192) = 8192\nbrk(0x8613000) = 0x8613000\nmmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95515000\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95494000\nmmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \n-1, 0) = 0x95393000\nmunmap(0x95393000, 1052672) = 0\nmunmap(0x95494000, 528384) = 0\nmunmap(0x95515000, 266240) = 0\nbrk(0x8603000) = 0x8603000\nbrk(0x85fb000) = 0x85fb000\n_llseek(144, 0, [292618240], SEEK_END) = 0\nbrk(0x85eb000) = 0x85eb000\nbrk(0x8613000) = 0x8613000\nmmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95515000\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95494000\nmmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \n-1, 0) = 0x95393000\nmunmap(0x95393000, 1052672) = 0\nmunmap(0x95494000, 528384) = 0\nmunmap(0x95515000, 266240) = 0\nbrk(0x8603000) = 0x8603000\nbrk(0x85fb000) = 0x85fb000\n_llseek(144, 0, [292618240], SEEK_END) = 0\nbrk(0x85eb000) = 0x85eb000\n_llseek(65, 115359744, [115359744], SEEK_SET) = 0\nread(65, \"\\304\\2\\0\\0\\200e\\331\\243\\1\\0\\0\\0`\\6\\320\\6\\360\\37\\4 \\0\\0\"..., \n8192) = 8192\n_llseek(65, 827695104, [827695104], SEEK_SET) = 0\nread(65, \"\\273\\2\\0\\0\\20\\36uY\\1\\0\\0\\0|\\3`\\22\\360\\37\\4 \\0\\0\\0\\0\\340\"..., \n8192) = 8192\n_llseek(65, 115367936, [115367936], SEEK_SET) = 0\nread(65, \"\\273\\2\\0\\0\\20\\36uY\\1\\0\\0\\0\\10\\0060\\10\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(65, \"\\0\\0\\0\\0\\0\\0\\0\\0\\1\\0\\0\\0\\324\\5\\0\\t\\360\\37\\4 \\0\\0\\0\\0\\0\"..., \n8192) = 8192\nread(65, \"\\0\\0\\0\\0\\0\\0\\0\\0\\1\\0\\0\\0\\324\\5\\0\\t\\360\\37\\4 \\0\\0\\0\\0\\0\"..., \n8192) = 8192\nread(65, \"\\304\\2\\0\\0\\344M>\\242\\1\\0\\0\\0L\\3 \\23\\360\\37\\4 \\0\\0\\0\\0\\340\"..., \n8192) = 8192\nbrk(0x8613000) = 0x8613000\nmmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95515000\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95494000\nmmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \n-1, 0) = 0x95393000\nmunmap(0x95393000, 1052672) = 0\nmunmap(0x95494000, 528384) = 0\nmunmap(0x95515000, 266240) = 0\n\nstrace output from the other 3 postgres processes looks the same.\n", "msg_date": "Wed, 17 Jun 2009 11:35:48 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "very slow selects on a small table" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> 
[root@rdl64xeoserv01 log]# strace -p 7397\n> Process 7397 attached - interrupt to quit\n> munmap(0x95393000, 1052672) = 0\n> munmap(0x95494000, 528384) = 0\n> munmap(0x95515000, 266240) = 0\n> brk(0x8603000) = 0x8603000\n> brk(0x85fb000) = 0x85fb000\n> _llseek(144, 0, [292618240], SEEK_END) = 0\n> brk(0x85eb000) = 0x85eb000\n> _llseek(65, 897179648, [897179648], SEEK_SET) = 0\n> read(65, \"\\276\\2\\0\\0\\320\\337\\275\\315\\1\\0\\0\\0|\\3`\\22\\360\\37\\4 \\0\\0\"..., \n> 8192) = 8192\n> _llseek(65, 471457792, [471457792], SEEK_SET) = 0\n> read(65, \"\\276\\2\\0\\0\\320\\337\\275\\315\\1\\0\\0\\0t\\6\\200\\6\\360\\37\\4 \\0\"..., \n> 8192) = 8192\n> read(65, \"\\276\\2\\0\\0\\354\\271\\355\\312\\1\\0\\0\\0\\374\\5`\\10\\360\\37\\4 \"..., \n> 8192) = 8192\n> read(65, \"\\0\\0\\0\\0\\0\\0\\0\\0\\1\\0\\0\\0\\324\\5\\0\\t\\360\\37\\4 \\0\\0\\0\\0\\0\"..., \n> 8192) = 8192\n> brk(0x8613000) = 0x8613000\n> mmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n> 0) = 0x95515000\n> mmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n> 0) = 0x95494000\n> mmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \n> -1, 0) = 0x95393000\n> munmap(0x95393000, 1052672) = 0\n> munmap(0x95494000, 528384) = 0\n> munmap(0x95515000, 266240) = 0\n> [ lather, rinse, repeat ]\n\nThat is a pretty odd trace for a Postgres backend; apparently it's\nrepeatedly acquiring and releasing a meg or two worth of memory, which\nis not very normal within a single query. Can you tell us more about\nthe query it's running? An EXPLAIN plan would be particularly\ninteresting. Also, could you try to determine which files 144 and 65\nrefer to (see lsof)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 18:29:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow selects on a small table " } ]
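A side note on the lsof step being requested here: once a descriptor number is resolved to a path of the form base/<database-oid>/<relfilenode>, the trailing number can be mapped back to a relation through the catalogs. A rough sketch follows; the two numbers are placeholders for whatever lsof actually reports, the pg_class lookup must be run while connected to the database whose OID matched, and on 8.3 relfilenode usually equals the table's OID but can differ after a rewrite such as CLUSTER:

SELECT datname FROM pg_database WHERE oid = 26611279;

SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 27241843;

The contrib utility oid2name performs the same mapping from the command line.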
[ { "msg_contents": "I have a column which only has six states or values.\n\nIs there a size advantage to using an enum for this data type?\nCurrently I have it defined as a character(1).\n\nThis table has about 600 million rows, so it could wind up making a\ndifference in total size.\n\nThanks,\nWhit\n", "msg_date": "Wed, 17 Jun 2009 18:06:06 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": true, "msg_subject": "enum for performance?" }, { "msg_contents": "Whit Armstrong <[email protected]> writes:\n> I have a column which only has six states or values.\n> Is there a size advantage to using an enum for this data type?\n> Currently I have it defined as a character(1).\n\nNope. enums are always 4 bytes. char(1) is going to take 2 bytes\n(assuming those six values are simple ASCII characters), at least\nas of PG 8.3 or later.\n\nDepending on what the adjacent columns are, the enum might not actually\ncost you anything --- the difference might well disappear into alignment\npadding anyway. But it's not going to save.\n\nAnother possibility is to look at the \"char\" (not char) type, which also\nstores single ASCII-only characters. That's just one byte. But again,\nit might well not save you anything, depending on alignment\nconsiderations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 18:12:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: enum for performance? " }, { "msg_contents": "On Wed, Jun 17, 2009 at 6:06 PM, Whit Armstrong<[email protected]> wrote:\n> I have a column which only has six states or values.\n>\n> Is there a size advantage to using an enum for this data type?\n> Currently I have it defined as a character(1).\n>\n> This table has about 600 million rows, so it could wind up making a\n> difference in total size.\n\nHere is what enums get you:\n*) You can skip a join to a detail table if one char is not enough to\nsufficiently describe the value to the user.\n*) If you need to order by the whats contained in the enum, the gains\ncan be tremendous because it can be inlined in the index:\n\ncreate table bigtable\n(\n company_id bigint,\n someval some_enum_t,\n sometime timestamptz,\n);\n\ncreate index bigindex on bigtable(company_id, someval, sometime);\n\nselect * from bigtable order by 1,2,3 limit 50;\n-- or\nselect * from bigtable where company_id = 12345 order by 2,3;\n\nThe disadvantage with enums is flexibility. Sometimes the performance\ndoesn't matter or you need that detail table anyways for other\nreasons.\n\nAlso, if you use \"char\" vs char(1), you shave a byte and a tiny bit of speed.\n\nmerlin\n", "msg_date": "Thu, 18 Jun 2009 11:19:33 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: enum for performance?" } ]
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> That is a pretty odd trace for a Postgres backend; apparently it's\n> repeatedly acquiring and releasing a meg or two worth of memory, which\n> is not very normal within a single query. Can you tell us more about\n> the query it's running? An EXPLAIN plan would be particularly\n> interesting. Also, could you try to determine which files 144 and 65\n> refer to (see lsof)?\n\nHere's the explain and a current strace and lsof. The strace shows even \nless I/O activity.\n\nThanks,\nBrian\n\n\ncemdb=# explain select * from ts_stats_transetgroup_user_daily a where \na.ts_id in (select b.ts_id from ts_stats_transetgroup_user_daily \nb,ts_stats_transet_user_interval c, ts_transetgroup_transets_map m where \nb.ts_transet_group_id = m.ts_transet_group_id and \nm.ts_transet_incarnation_id = c.ts_transet_incarnation_id and \nc.ts_user_incarnation_id = b.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2009-6-16 01:00' and \nc.ts_interval_start_time < '2009-6-16 02:00');\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash IN Join (cost=128162.63..211221.43 rows=231127 width=779)\n Hash Cond: (a.ts_id = b.ts_id)\n -> Seq Scan on ts_stats_transetgroup_user_daily a \n(cost=0.00..80511.26 rows=247451 width=779)\n -> Hash (cost=126718.09..126718.09 rows=231127 width=8)\n -> Hash Join (cost=82370.45..126718.09 rows=231127 width=8)\n Hash Cond: ((m.ts_transet_group_id = \nb.ts_transet_group_id) AND (c.ts_user_incarnation_id = \nb.ts_user_incarnation_id))\n -> Hash Join (cost=3.32..27316.61 rows=211716 width=16)\n Hash Cond: (c.ts_transet_incarnation_id = \nm.ts_transet_incarnation_id)\n -> Index Scan using \nts_stats_transet_user_interval_starttime on \nts_stats_transet_user_interval c (cost=0.00..25857.75 rows=211716 width=16)\n Index Cond: ((ts_interval_start_time >= \n'2009-06-16 01:00:00-07'::timestamp with time zone) AND \n(ts_interval_start_time < '2009-06-16 02:00:00-07'::timestamp with time \nzone))\n -> Hash (cost=2.58..2.58 rows=117 width=16)\n -> Seq Scan on ts_transetgroup_transets_map \nm (cost=0.00..2.58 rows=117 width=16)\n -> Hash (cost=80511.26..80511.26 rows=247451 width=24)\n -> Seq Scan on ts_stats_transetgroup_user_daily b \n (cost=0.00..80511.26 rows=247451 width=24)\n(14 rows)\n\n\n[root@rdl64xeoserv01 log]# strace -p 7397\nProcess 7397 attached - interrupt to quit\nmmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95515000\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95494000\nmmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \n-1, 0) = 0x95393000\nmunmap(0x95393000, 1052672) = 0\nmunmap(0x95494000, 528384) = 0\nmunmap(0x95515000, 266240) = 0\nbrk(0x8603000) = 0x8603000\nbrk(0x85fb000) = 0x85fb000\n_llseek(164, 0, [201940992], SEEK_END) = 0\nbrk(0x85eb000) = 0x85eb000\nbrk(0x8613000) = 0x8613000\nmmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95515000\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95494000\nmmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \n-1, 0) = 0x95393000\nmunmap(0x95393000, 1052672) = 0\nmunmap(0x95494000, 528384) = 0\nmunmap(0x95515000, 266240) = 0\nbrk(0x8603000) = 0x8603000\nbrk(0x85fb000) = 0x85fb000\n_llseek(164, 0, [201940992], SEEK_END) = 
0\nbrk(0x85eb000) = 0x85eb000\nbrk(0x8613000) = 0x8613000\nmmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95515000\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95494000\nmmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \n-1, 0) = 0x95393000\nmunmap(0x95393000, 1052672) = 0\nmunmap(0x95494000, 528384) = 0\nmunmap(0x95515000, 266240) = 0\nbrk(0x8603000) = 0x8603000\nbrk(0x85fb000) = 0x85fb000\n_llseek(164, 0, [201940992], SEEK_END) = 0\nbrk(0x85eb000) = 0x85eb000\nbrk(0x8613000) = 0x8613000\nmmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95515000\nmmap2(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, \n0) = 0x95494000\nmmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \n-1, 0) = 0x95393000\nmunmap(0x95393000, 1052672) = 0\nmunmap(0x95494000, 528384) = 0\nmunmap(0x95515000, 266240) = 0\nbrk(0x8603000) = 0x8603000\nbrk(0x85fb000) = 0x85fb000\n_llseek(164, 0, [201940992], SEEK_END) = 0\n\n\n[root@rdl64xeoserv01 log]# lsof -p 7397\nCOMMAND PID USER FD TYPE DEVICE SIZE NODE NAME\npostmaste 7397 postgres cwd DIR 8,2 4096 12681488 \n/var/lib/pgsql/data\npostmaste 7397 postgres rtd DIR 8,2 4096 2 /\npostmaste 7397 postgres txt REG 8,2 3304132 3514609 \n/usr/bin/postgres\npostmaste 7397 postgres mem REG 8,2 45889 4718642 \n/lib/libnss_files-2.3.4.so\npostmaste 7397 postgres mem REG 8,2 136016 3510667 \n/usr/lib/libk5crypto.so.3.0\npostmaste 7397 postgres mem REG 8,2 202976 3514362 \n/usr/lib/libldap-2.2.so.7.0.6\npostmaste 7397 postgres mem REG 8,2 48548 3514621 \n/usr/lib/liblber-2.2.so.7.0.6\npostmaste 7397 postgres mem REG 8,2 82320 3514622 \n/usr/lib/libsasl2.so.2.0.19\npostmaste 7397 postgres mem REG 8,2 106397 4719545 \n/lib/ld-2.3.4.so\npostmaste 7397 postgres mem REG 8,2 7004 4719556 \n/lib/libcom_err.so.2.1\npostmaste 7397 postgres mem REG 8,2 1454802 4719546 \n/lib/tls/libc-2.3.4.so\npostmaste 7397 postgres mem REG 8,2 15324 4719549 \n/lib/libdl-2.3.4.so\npostmaste 7397 postgres mem REG 8,2 178019 4719550 \n/lib/tls/libm-2.3.4.so\npostmaste 7397 postgres mem REG 8,2 63624 3512751 \n/usr/lib/libz.so.1.2.1.2\npostmaste 7397 postgres mem REG 8,2 27191 4719553 \n/lib/libcrypt-2.3.4.so\npostmaste 7397 postgres mem REG 8,2 79488 4719551 \n/lib/libresolv-2.3.4.so\npostmaste 7397 postgres mem REG 8,2 32024 4718594 \n/lib/libpam.so.0.77\npostmaste 7397 postgres mem REG 8,2 60116 4718721 \n/lib/libaudit.so.0.0.0\npostmaste 7397 postgres mem REG 8,2 82944 3514353 \n/usr/lib/libgssapi_krb5.so.2.2\npostmaste 7397 postgres mem REG 8,2 415188 3514352 \n/usr/lib/libkrb5.so.3.2\npostmaste 7397 postgres mem REG 8,2 213600 24708638 \n/opt/PostgreSQL/8.3/lib/libssl.so.0.9.7a\npostmaste 7397 postgres mem REG 8,2 945120 24708627 \n/opt/PostgreSQL/8.3/lib/libcrypto.so.0.9.7a\npostmaste 7397 postgres DEL REG 0,6 5177344 \n/SYSV0052e2c1\npostmaste 7397 postgres mem REG 8,2 48533072 3509415 \n/usr/lib/locale/locale-archive\npostmaste 7397 postgres 0r CHR 1,3 1969 /dev/null\npostmaste 7397 postgres 1w FIFO 0,7 9913597 pipe\npostmaste 7397 postgres 2w FIFO 0,7 9913597 pipe\npostmaste 7397 postgres 3u REG 8,2 368640 12812420 \n/var/lib/pgsql/data/base/26611279/1259\npostmaste 7397 postgres 4u REG 8,2 8192 12812368 \n/var/lib/pgsql/data/base/26611279/2601\npostmaste 7397 postgres 5u REG 8,2 8192 12812341 \n/var/lib/pgsql/data/base/26611279/2615\npostmaste 7397 postgres 6u REG 8,2 16384 12812326 \n/var/lib/pgsql/data/base/26611279/2684\npostmaste 7397 
postgres 7u IPv4 9913600 UDP \nlocalhost.localdomain:35057->localhost.localdomain:35057\npostmaste 7397 postgres 8u IPv4 10045960 TCP \nlocalhost.localdomain:postgres->localhost.localdomain:53377 (ESTABLISHED)\npostmaste 7397 postgres 9u REG 8,2 16384 12812366 \n/var/lib/pgsql/data/base/26611279/2685\npostmaste 7397 postgres 10u REG 8,2 253952 12812354 \n/var/lib/pgsql/data/base/26611279/2662\npostmaste 7397 postgres 11u REG 8,2 286720 12812359 \n/var/lib/pgsql/data/base/26611279/2663\npostmaste 7397 postgres 12u REG 8,2 131072 12812365 \n/var/lib/pgsql/data/base/26611279/2609\npostmaste 7397 postgres 13u REG 8,2 98304 12812430 \n/var/lib/pgsql/data/base/26611279/2675\npostmaste 7397 postgres 14u REG 8,2 2097152 12812406 \n/var/lib/pgsql/data/base/26611279/1249\npostmaste 7397 postgres 15u REG 8,2 1130496 12812363 \n/var/lib/pgsql/data/base/26611279/2659\npostmaste 7397 postgres 16u REG 8,2 122880 12812373 \n/var/lib/pgsql/data/base/26611279/2604\npostmaste 7397 postgres 17u REG 8,2 16384 12813349 \n/var/lib/pgsql/data/base/26611279/27241082\npostmaste 7397 postgres 18u REG 8,2 16777216 26427412 \n/var/lib/pgsql/data/pg_xlog/00000001000002C30000001E (deleted)\npostmaste 7397 postgres 19u REG 8,2 16384 12813350 \n/var/lib/pgsql/data/base/26611279/27241083\npostmaste 7397 postgres 20u REG 8,2 16384 12813351 \n/var/lib/pgsql/data/base/26611279/27241084\npostmaste 7397 postgres 21u REG 8,2 16384 12813352 \n/var/lib/pgsql/data/base/26611279/27241085\npostmaste 7397 postgres 22u REG 8,2 16384 12813353 \n/var/lib/pgsql/data/base/26611279/27241086\npostmaste 7397 postgres 23u REG 8,2 16384 12813354 \n/var/lib/pgsql/data/base/26611279/27241087\npostmaste 7397 postgres 24u REG 8,2 16384 12813355 \n/var/lib/pgsql/data/base/26611279/27241088\npostmaste 7397 postgres 25u REG 8,2 16384 12813356 \n/var/lib/pgsql/data/base/26611279/27241089\npostmaste 7397 postgres 26u REG 8,2 16384 12813357 \n/var/lib/pgsql/data/base/26611279/27241090\npostmaste 7397 postgres 27u REG 8,2 16384 12813358 \n/var/lib/pgsql/data/base/26611279/27241091\npostmaste 7397 postgres 28u REG 8,2 1073741824 12813640 \n/var/lib/pgsql/data/base/26611279/27236833\npostmaste 7397 postgres 29u REG 8,2 1073741824 12812645 \n/var/lib/pgsql/data/base/26611279/27236833.1\npostmaste 7397 postgres 30u REG 8,2 1073741824 12812681 \n/var/lib/pgsql/data/base/26611279/27236833.2\npostmaste 7397 postgres 31u REG 8,2 1073741824 24133635 \n/var/lib/pgsql/data/base/26611279/27236833.3\npostmaste 7397 postgres 32u REG 8,2 1073741824 24133636 \n/var/lib/pgsql/data/base/26611279/27236833.4\npostmaste 7397 postgres 33u REG 8,2 1073741824 24133664 \n/var/lib/pgsql/data/base/26611279/27236833.5\npostmaste 7397 postgres 34u REG 8,2 1073741824 12813227 \n/var/lib/pgsql/data/base/26611279/27236833.6\npostmaste 7397 postgres 35u REG 8,2 1073741824 12812305 \n/var/lib/pgsql/data/base/26611279/27236833.7\npostmaste 7397 postgres 36u REG 8,2 1073741824 24133637 \n/var/lib/pgsql/data/base/26611279/27236833.8\npostmaste 7397 postgres 37u REG 8,2 1073741824 24133634 \n/var/lib/pgsql/data/base/26611279/27236833.9\npostmaste 7397 postgres 38u REG 8,2 1073741824 24133639 \n/var/lib/pgsql/data/base/26611279/27236833.10\npostmaste 7397 postgres 39u REG 8,2 1073741824 24133640 \n/var/lib/pgsql/data/base/26611279/27236833.11\npostmaste 7397 postgres 40u REG 8,2 1073741824 24133641 \n/var/lib/pgsql/data/base/26611279/27236833.12\npostmaste 7397 postgres 41u REG 8,2 1073741824 24133643 \n/var/lib/pgsql/data/base/26611279/27236833.13\npostmaste 7397 postgres 42u REG 8,2 
1073741824 24133646 \n/var/lib/pgsql/data/base/26611279/27236833.14\npostmaste 7397 postgres 43u REG 8,2 1073741824 24133652 \n/var/lib/pgsql/data/base/26611279/27236833.15\npostmaste 7397 postgres 44u REG 8,2 1073741824 24133654 \n/var/lib/pgsql/data/base/26611279/27236833.16\npostmaste 7397 postgres 45u REG 8,2 1073741824 28295169 \n/var/lib/pgsql/data/base/26611279/27236833.17\npostmaste 7397 postgres 46u REG 8,2 1073741824 32489473 \n/var/lib/pgsql/data/base/26611279/27236833.18\npostmaste 7397 postgres 47u REG 8,2 1073741824 12812808 \n/var/lib/pgsql/data/base/26611279/27236833.19\npostmaste 7397 postgres 48u REG 8,2 1073741824 12812811 \n/var/lib/pgsql/data/base/26611279/27236833.20\npostmaste 7397 postgres 49u REG 8,2 1073741824 12812812 \n/var/lib/pgsql/data/base/26611279/27236833.21\npostmaste 7397 postgres 50u REG 8,2 1073741824 32489475 \n/var/lib/pgsql/data/base/26611279/27236833.22\npostmaste 7397 postgres 51u REG 8,2 1073741824 32489487 \n/var/lib/pgsql/data/base/26611279/27236833.23\npostmaste 7397 postgres 52u REG 8,2 1073741824 32489488 \n/var/lib/pgsql/data/base/26611279/27236833.24\npostmaste 7397 postgres 53u REG 8,2 1073741824 12812838 \n/var/lib/pgsql/data/base/26611279/27236833.25\npostmaste 7397 postgres 54u REG 8,2 1073741824 32489486 \n/var/lib/pgsql/data/base/26611279/27236833.26\npostmaste 7397 postgres 55u REG 8,2 1073741824 32489490 \n/var/lib/pgsql/data/base/26611279/27236833.27\npostmaste 7397 postgres 56u REG 8,2 1073741824 32489476 \n/var/lib/pgsql/data/base/26611279/27236833.28\npostmaste 7397 postgres 57u REG 8,2 1073741824 32489477 \n/var/lib/pgsql/data/base/26611279/27236833.29\npostmaste 7397 postgres 58u REG 8,2 1073741824 12812862 \n/var/lib/pgsql/data/base/26611279/27236833.30\npostmaste 7397 postgres 59u REG 8,2 977092608 32489492 \n/var/lib/pgsql/data/base/26611279/27241806\npostmaste 7397 postgres 60u REG 8,2 824860672 32489493 \n/var/lib/pgsql/data/base/26611279/27241838\npostmaste 7397 postgres 61u REG 8,2 824303616 32489494 \n/var/lib/pgsql/data/base/26611279/27241839\npostmaste 7397 postgres 62u REG 8,2 824770560 32489495 \n/var/lib/pgsql/data/base/26611279/27241840\npostmaste 7397 postgres 63u REG 8,2 1024270336 32489496 \n/var/lib/pgsql/data/base/26611279/27241841\npostmaste 7397 postgres 64u REG 8,2 1024016384 32489497 \n/var/lib/pgsql/data/base/26611279/27241842\npostmaste 7397 postgres 65u REG 8,2 1073741824 32489498 \n/var/lib/pgsql/data/base/26611279/27241843\npostmaste 7397 postgres 66u REG 8,2 1073741824 32489499 \n/var/lib/pgsql/data/base/26611279/27241844\npostmaste 7397 postgres 67u REG 8,2 824172544 32489500 \n/var/lib/pgsql/data/base/26611279/27241845\npostmaste 7397 postgres 68u REG 8,2 824639488 32489501 \n/var/lib/pgsql/data/base/26611279/27241846\npostmaste 7397 postgres 69u REG 8,2 16384 12812471 \n/var/lib/pgsql/data/base/26611279/26611350\npostmaste 7397 postgres 70u REG 8,2 155648 12812401 \n/var/lib/pgsql/data/base/26611279/2610\npostmaste 7397 postgres 71u REG 8,2 16384 12812744 \n/var/lib/pgsql/data/base/26611279/26612401\npostmaste 7397 postgres 72u REG 8,2 16384 12813163 \n/var/lib/pgsql/data/base/26611279/26612835\npostmaste 7397 postgres 73u REG 8,2 16384 12813164 \n/var/lib/pgsql/data/base/26611279/26612836\npostmaste 7397 postgres 74u REG 8,2 16384 12813165 \n/var/lib/pgsql/data/base/26611279/26612837\npostmaste 7397 postgres 75u REG 8,2 352256 12812314 \n/var/lib/pgsql/data/base/26611279/2696\npostmaste 7397 postgres 76u REG 8,2 9469952 12812377 \n/var/lib/pgsql/data/base/26611279/2619\npostmaste 7397 postgres 
77u REG 8,2 8192 12813369 \n/var/lib/pgsql/data/base/26611279/27241102\npostmaste 7397 postgres 78u REG 8,2 16384 12813370 \n/var/lib/pgsql/data/base/26611279/27241103\npostmaste 7397 postgres 79u REG 8,2 16384 12813371 \n/var/lib/pgsql/data/base/26611279/27241104\npostmaste 7397 postgres 80u REG 8,2 16384 12813372 \n/var/lib/pgsql/data/base/26611279/27241105\npostmaste 7397 postgres 81u REG 8,2 16384 12813373 \n/var/lib/pgsql/data/base/26611279/27241106\npostmaste 7397 postgres 82u REG 8,2 16384 12813374 \n/var/lib/pgsql/data/base/26611279/27241107\npostmaste 7397 postgres 83u REG 8,2 16384 12813375 \n/var/lib/pgsql/data/base/26611279/27241108\npostmaste 7397 postgres 84u REG 8,2 16384 12813376 \n/var/lib/pgsql/data/base/26611279/27241109\npostmaste 7397 postgres 85u REG 8,2 16384 12813377 \n/var/lib/pgsql/data/base/26611279/27241110\npostmaste 7397 postgres 86u REG 8,2 16384 12813378 \n/var/lib/pgsql/data/base/26611279/27241111\npostmaste 7397 postgres 87u REG 8,2 8192 12813401 \n/var/lib/pgsql/data/base/26611279/27241134\npostmaste 7397 postgres 88u REG 8,2 16384 12813402 \n/var/lib/pgsql/data/base/26611279/27241135\npostmaste 7397 postgres 89u REG 8,2 16384 12813403 \n/var/lib/pgsql/data/base/26611279/27241136\npostmaste 7397 postgres 90u REG 8,2 16384 12813404 \n/var/lib/pgsql/data/base/26611279/27241137\npostmaste 7397 postgres 91u REG 8,2 16384 12813405 \n/var/lib/pgsql/data/base/26611279/27241138\npostmaste 7397 postgres 92u REG 8,2 16384 12813406 \n/var/lib/pgsql/data/base/26611279/27241139\npostmaste 7397 postgres 93u REG 8,2 16384 12813407 \n/var/lib/pgsql/data/base/26611279/27241140\npostmaste 7397 postgres 94u REG 8,2 16384 12813408 \n/var/lib/pgsql/data/base/26611279/27241141\npostmaste 7397 postgres 95u REG 8,2 16384 12813409 \n/var/lib/pgsql/data/base/26611279/27241142\npostmaste 7397 postgres 96u REG 8,2 16384 12813410 \n/var/lib/pgsql/data/base/26611279/27241143\npostmaste 7397 postgres 97u REG 8,2 16384 12813411 \n/var/lib/pgsql/data/base/26611279/27241144\npostmaste 7397 postgres 98u REG 8,2 32768 12813451 \n/var/lib/pgsql/data/base/26611279/26630701\npostmaste 7397 postgres 99u REG 8,2 32768 24133805 \n/var/lib/pgsql/data/base/26611279/27172990\npostmaste 7397 postgres 100u REG 8,2 32768 12812473 \n/var/lib/pgsql/data/base/26611279/27172991\npostmaste 7397 postgres 101u REG 8,2 32768 12812504 \n/var/lib/pgsql/data/base/26611279/27172993\npostmaste 7397 postgres 102u REG 8,2 32768 12812474 \n/var/lib/pgsql/data/base/26611279/27172992\npostmaste 7397 postgres 103u REG 8,2 716111872 12813434 \n/var/lib/pgsql/data/base/26611279/27241167\npostmaste 7397 postgres 104u REG 8,2 7602176 12813438 \n/var/lib/pgsql/data/base/26611279/27241168\npostmaste 7397 postgres 105u REG 8,2 24600576 12813441 \n/var/lib/pgsql/data/base/26611279/27241169\npostmaste 7397 postgres 106u REG 8,2 24502272 12813442 \n/var/lib/pgsql/data/base/26611279/27241170\npostmaste 7397 postgres 107u REG 8,2 30048256 12813443 \n/var/lib/pgsql/data/base/26611279/27241171\npostmaste 7397 postgres 108u REG 8,2 24559616 12813444 \n/var/lib/pgsql/data/base/26611279/27241172\npostmaste 7397 postgres 109u REG 8,2 29745152 12813445 \n/var/lib/pgsql/data/base/26611279/27241173\npostmaste 7397 postgres 110u REG 8,2 29851648 12813446 \n/var/lib/pgsql/data/base/26611279/27241174\npostmaste 7397 postgres 111u REG 8,2 27992064 12813447 \n/var/lib/pgsql/data/base/26611279/27241175\npostmaste 7397 postgres 112u REG 8,2 24674304 12813448 \n/var/lib/pgsql/data/base/26611279/27241176\npostmaste 7397 postgres 113u REG 8,2 24690688 
12813449 \n/var/lib/pgsql/data/base/26611279/27241177\npostmaste 7397 postgres 114u REG 8,2 763609088 12813423 \n/var/lib/pgsql/data/base/26611279/27241156\npostmaste 7397 postgres 115u REG 8,2 4513792 12813424 \n/var/lib/pgsql/data/base/26611279/27241157\npostmaste 7397 postgres 116u REG 8,2 24756224 12813425 \n/var/lib/pgsql/data/base/26611279/27241158\npostmaste 7397 postgres 117u REG 8,2 24387584 12813426 \n/var/lib/pgsql/data/base/26611279/27241159\npostmaste 7397 postgres 118u REG 8,2 29843456 12813427 \n/var/lib/pgsql/data/base/26611279/27241160\npostmaste 7397 postgres 119u REG 8,2 24641536 12813428 \n/var/lib/pgsql/data/base/26611279/27241161\npostmaste 7397 postgres 120u REG 8,2 30130176 12813429 \n/var/lib/pgsql/data/base/26611279/27241162\npostmaste 7397 postgres 121u REG 8,2 30023680 12813430 \n/var/lib/pgsql/data/base/26611279/27241163\npostmaste 7397 postgres 122u REG 8,2 29220864 12813431 \n/var/lib/pgsql/data/base/26611279/27241164\npostmaste 7397 postgres 123u REG 8,2 24592384 12813432 \n/var/lib/pgsql/data/base/26611279/27241165\npostmaste 7397 postgres 124u REG 8,2 24674304 12813433 \n/var/lib/pgsql/data/base/26611279/27241166\npostmaste 7397 postgres 125u REG 8,2 1073741824 12812865 \n/var/lib/pgsql/data/base/26611279/27236833.31\npostmaste 7397 postgres 126u REG 8,2 649412608 12813412 \n/var/lib/pgsql/data/base/26611279/27241145\npostmaste 7397 postgres 127u REG 8,2 18046976 12813413 \n/var/lib/pgsql/data/base/26611279/27241146\npostmaste 7397 postgres 128u REG 8,2 23617536 12813414 \n/var/lib/pgsql/data/base/26611279/27241147\npostmaste 7397 postgres 129u REG 8,2 23568384 12813415 \n/var/lib/pgsql/data/base/26611279/27241148\npostmaste 7397 postgres 130u REG 8,2 28614656 12813416 \n/var/lib/pgsql/data/base/26611279/27241149\npostmaste 7397 postgres 131u REG 8,2 23306240 12813417 \n/var/lib/pgsql/data/base/26611279/27241150\npostmaste 7397 postgres 132u REG 8,2 28483584 12813418 \n/var/lib/pgsql/data/base/26611279/27241151\npostmaste 7397 postgres 133u REG 8,2 28499968 12813419 \n/var/lib/pgsql/data/base/26611279/27241152\npostmaste 7397 postgres 134u REG 8,2 21676032 12813420 \n/var/lib/pgsql/data/base/26611279/27241153\npostmaste 7397 postgres 135u REG 8,2 23429120 12813421 \n/var/lib/pgsql/data/base/26611279/27241154\npostmaste 7397 postgres 136u REG 8,2 23445504 12813422 \n/var/lib/pgsql/data/base/26611279/27241155\npostmaste 7397 postgres 137u REG 8,2 205619200 12813778 \n/var/lib/pgsql/data/base/26611279/27236971\npostmaste 7397 postgres 138u REG 8,2 65536 12813513 \n/var/lib/pgsql/data/base/26611279/27236747\npostmaste 7397 postgres 139u REG 8,2 170000384 12813779 \n/var/lib/pgsql/data/base/26611279/27236972\npostmaste 7397 postgres 140u REG 8,2 170262528 12813780 \n/var/lib/pgsql/data/base/26611279/27236973\npostmaste 7397 postgres 141u REG 8,2 149225472 12813310 \n/var/lib/pgsql/data/base/26611279/27241048\npostmaste 7397 postgres 142u REG 8,2 65536 12813559 \n/var/lib/pgsql/data/base/26611279/27236793\npostmaste 7397 postgres 143u REG 8,2 149168128 12813312 \n/var/lib/pgsql/data/base/26611279/27241050\npostmaste 7397 postgres 144u REG 8,2 1073741824 32489478 \n/var/lib/pgsql/data/base/26611279/27236833.32\npostmaste 7397 postgres 145u REG 8,2 53567488 32489479 \n/var/lib/pgsql/data/base/26611279/27241844.1\npostmaste 7397 postgres 146u REG 8,2 170016768 12813772 \n/var/lib/pgsql/data/base/26611279/27236965\npostmaste 7397 postgres 147u REG 8,2 1073741824 12813770 \n/var/lib/pgsql/data/base/26611279/27236963\npostmaste 7397 postgres 148u REG 8,2 1073741824 
12813228 \n/var/lib/pgsql/data/base/26611279/27236963.1\npostmaste 7397 postgres 149u REG 8,2 1073741824 24133642 \n/var/lib/pgsql/data/base/26611279/27236963.2\npostmaste 7397 postgres 150u REG 8,2 1073741824 32489474 \n/var/lib/pgsql/data/base/26611279/27236963.3\npostmaste 7397 postgres 151u REG 8,2 1073741824 32489489 \n/var/lib/pgsql/data/base/26611279/27236963.4\npostmaste 7397 postgres 152u REG 8,2 573038592 12812863 \n/var/lib/pgsql/data/base/26611279/27236963.5\npostmaste 7397 postgres 153u REG 8,2 207003648 12813776 \n/var/lib/pgsql/data/base/26611279/27236969\npostmaste 7397 postgres 154u REG 8,2 52887552 32489480 \n/var/lib/pgsql/data/base/26611279/27241843.1\npostmaste 7397 postgres 155u REG 8,2 206692352 12813774 \n/var/lib/pgsql/data/base/26611279/27236967\npostmaste 7397 postgres 156u REG 8,2 206381056 12813777 \n/var/lib/pgsql/data/base/26611279/27236970\npostmaste 7397 postgres 157u REG 8,2 170360832 12813773 \n/var/lib/pgsql/data/base/26611279/27236966\npostmaste 7397 postgres 158u REG 8,2 16384 12813686 \n/var/lib/pgsql/data/base/26611279/27236879\npostmaste 7397 postgres 159u REG 8,2 162889728 12813771 \n/var/lib/pgsql/data/base/26611279/27236964\npostmaste 7397 postgres 160u REG 8,2 190480384 12813315 \n/var/lib/pgsql/data/base/26611279/27241053\npostmaste 7397 postgres 161u REG 8,2 169484288 12813775 \n/var/lib/pgsql/data/base/26611279/27236968\npostmaste 7397 postgres 162u REG 8,2 65536 12813512 \n/var/lib/pgsql/data/base/26611279/27236746\npostmaste 7397 postgres 163u REG 8,2 16384 12813728 \n/var/lib/pgsql/data/base/26611279/27236921\npostmaste 7397 postgres 164u REG 8,2 201940992 32489481 \n/var/lib/pgsql/data/base/26611279/27236833.33\npostmaste 7397 postgres 165u REG 8,2 16384 12813692 \n/var/lib/pgsql/data/base/26611279/27236885\npostmaste 7397 postgres 166u REG 8,2 16384 12813693 \n/var/lib/pgsql/data/base/26611279/27236886\n", "msg_date": "Wed, 17 Jun 2009 15:49:39 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very slow selects on a small table" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> Here's the explain and a current strace and lsof. The strace shows even \n> less I/O activity.\n\n> cemdb=# explain select * from ts_stats_transetgroup_user_daily a where \n> a.ts_id in (select b.ts_id from ts_stats_transetgroup_user_daily \n> b,ts_stats_transet_user_interval c, ts_transetgroup_transets_map m where \n> b.ts_transet_group_id = m.ts_transet_group_id and \n> m.ts_transet_incarnation_id = c.ts_transet_incarnation_id and \n> c.ts_user_incarnation_id = b.ts_user_incarnation_id and \n> c.ts_interval_start_time >= '2009-6-16 01:00' and \n> c.ts_interval_start_time < '2009-6-16 02:00');\n\nUm, are you sure that is the query that PID 7397 is running? It doesn't\nmatch your previous pg_stat_activity printout, nor do I see anything\nabout partitioning by PKs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 19:16:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow selects on a small table " } ]
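For reference, a backend's current statement can be pulled from pg_stat_activity rather than reconstructed by hand. A sketch using the 8.3 column names (this assumes track_activities is left at its default of on; note that 8.3 truncates very long statements in current_query, so log_min_duration_statement or log_statement in the server log may be needed to capture the complete text):

SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE procpid = 7397;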
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> Um, are you sure that is the query that PID 7397 is running? It doesn't\n> match your previous pg_stat_activity printout, nor do I see anything\n> about partitioning by PKs.\n\nUmm, indeed. I had to construct the query by hand and left out the \npartition part. Here's the full query. Also, I took the liberty of \nreducing the completely expanded column list (shown in part in the \npg_stat_activity printout) in the actual query to \"*\".\n\nThanks,\nBrian\n\ncemdb=# explain select * from ts_stats_transetgroup_user_daily a where \na.ts_id in (select b.ts_id from ts_stats_transetgroup_user_daily \nb,ts_stats_transet_user_interval c, ts_transetgroup_transets_map m where \nb.ts_transet_group_id = m.ts_transet_group_id and \nm.ts_transet_incarnation_id = c.ts_transet_incarnation_id and \nc.ts_user_incarnation_id = b.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2009-6-16 01:00' and \nc.ts_interval_start_time < '2009-6-16 02:00') and a.ts_id > 0 and \na.ts_id < 100000 order by a.ts_id;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=82370.45..128489.59 rows=1 width=779)\n Join Filter: (b.ts_id = a.ts_id)\n -> Index Scan using ts_stats_transetgroup_user_daily_pkey on \nts_stats_transetgroup_user_daily a (cost=0.00..8.22 rows=1 width=779)\n Index Cond: ((ts_id > 0) AND (ts_id < 100000))\n -> Hash Join (cost=82370.45..127026.87 rows=232721 width=8)\n Hash Cond: ((m.ts_transet_group_id = b.ts_transet_group_id) \nAND (c.ts_user_incarnation_id = b.ts_user_incarnation_id))\n -> Hash Join (cost=3.32..27507.92 rows=213176 width=16)\n Hash Cond: (c.ts_transet_incarnation_id = \nm.ts_transet_incarnation_id)\n -> Index Scan using \nts_stats_transet_user_interval_starttime on \nts_stats_transet_user_interval c (cost=0.00..26039.02 rows=213176 width=16)\n Index Cond: ((ts_interval_start_time >= \n'2009-06-16 01:00:00-07'::timestamp with time zone) AND \n(ts_interval_start_time < '2009-06-16 02:00:00-07'::timestamp with time \nzone))\n -> Hash (cost=2.58..2.58 rows=117 width=16)\n -> Seq Scan on ts_transetgroup_transets_map m \n(cost=0.00..2.58 rows=117 width=16)\n -> Hash (cost=80511.26..80511.26 rows=247451 width=24)\n -> Seq Scan on ts_stats_transetgroup_user_daily b \n(cost=0.00..80511.26 rows=247451 width=24)\n(14 rows)\n\n\n", "msg_date": "Wed, 17 Jun 2009 16:25:12 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very slow selects on a small table" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> Tom Lane [[email protected]] wrote:\n>> Um, are you sure that is the query that PID 7397 is running? It doesn't\n>> match your previous pg_stat_activity printout, nor do I see anything\n>> about partitioning by PKs.\n\n> Umm, indeed. I had to construct the query by hand and left out the \n> partition part. Here's the full query. Also, I took the liberty of \n> reducing the completely expanded column list (shown in part in the \n> pg_stat_activity printout) in the actual query to \"*\".\n\nOkay ... 
I think the problem is right here:\n\n> Nested Loop IN Join (cost=82370.45..128489.59 rows=1 width=779)\n> Join Filter: (b.ts_id = a.ts_id)\n> -> Index Scan using ts_stats_transetgroup_user_daily_pkey on \n> ts_stats_transetgroup_user_daily a (cost=0.00..8.22 rows=1 width=779)\n> Index Cond: ((ts_id > 0) AND (ts_id < 100000))\n> -> Hash Join (cost=82370.45..127026.87 rows=232721 width=8)\n\nIt's choosing this plan shape because it thinks that the indexscan on\nts_stats_transetgroup_user_daily will return only one row, which I bet\nis off by something close to 100000x. The memory usage pulsation\ncorresponds to re-executing the inner hash join, from scratch (including\nrebuilding its hash table) for each outer row. Ouch.\n\nThis seems like kind of a stupid plan anyway (which PG version was this\nexactly?) but certainly the big issue is the catastrophically bad\nrowcount estimate for the indexscan. Do you have ANALYZE stats for\nts_stats_transetgroup_user_daily at all (look in pg_stats)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 19:35:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow selects on a small table " } ]
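One quick way to see how far off that rows=1 estimate is (a sketch, reusing the range predicate from the query being diagnosed): run the index condition on its own and compare the planner's estimated row count with the actual count reported by EXPLAIN ANALYZE:

EXPLAIN ANALYZE
SELECT count(*)
FROM ts_stats_transetgroup_user_daily
WHERE ts_id > 0 AND ts_id < 100000;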
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> This seems like kind of a stupid plan anyway (which PG version was this\n> exactly?) but certainly the big issue is the catastrophically bad\n> rowcount estimate for the indexscan. Do you have ANALYZE stats for\n> ts_stats_transetgroup_user_daily at all (look in pg_stats)?\npostgres 8.3.5. Yes, here's a count(*) from pg_stats:\n\ncemdb=# select count(*) from pg_stats where \ntablename='ts_stats_transetgroup_user_daily';\n count\n-------\n 186\n(1 row)\n\n\nThanks,\nBrian\n", "msg_date": "Wed, 17 Jun 2009 16:42:03 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very slow selects on a small table" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> Tom Lane [[email protected]] wrote:\n>> ... Do you have ANALYZE stats for\n>> ts_stats_transetgroup_user_daily at all (look in pg_stats)?\n\n> postgres 8.3.5. Yes, here's a count(*) from pg_stats:\n> 186\n\nOK, so what's the entry for column ts_id?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 20:07:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow selects on a small table " } ]
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> OK, so what's the entry for column ts_id?\n\nIs this what you requested? Brian\n\ncemdb=# select * from pg_stats where \ntablename='ts_stats_transetgroup_user_daily' and attname = 'ts_id';\n schemaname | tablename | attname | null_frac | \navg_width | n_distinct | most_common_vals | most_common_freqs | \n \n histogram_bounds \n \n| correlation\n------------+----------------------------------+---------+-----------+-----------+------------+------------------+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------+-------------\n public | ts_stats_transetgroup_user_daily | ts_id | 0 | \n 8 | -1 | | | 
\n{600000000000000001,600000000000002537,600000000000005139,600000000000007918,600000000000010330,600000000000013206,600000000000015829,600000000000018440,600000000000021018,600000000000023430,600000000000025887,600000000000028165,600000000000030571,600000000000033290,600000000000036434,600000000000038845,600000000000041276,600000000000043702,600000000000045978,600000000000048385,600000000000050648,600000000000053220,600000000000055602,600000000000058138,600000000000060613,600000000000063114,600000000000065750,600000000000068440,600000000000070859,600000000000073162,600000000000075597,600000000000078199,600000000000081054,600000000000083455,600000000000086049,600000000000088753,600000000000091231,600000000000093942,600000000000096229,600000000000098598,600000000000101190,600000000000103723,600000000000105917,600000000000108273,600000000000110687,600000000000113114,600000000000115528,600000000000118024,600000000000121085,600000000000123876,600000000000126548,600000000000128749,6\n00000000000131260,600000000000133668,600000000000135988,600000000000138755,600000000000141251,600000000000143855,600000000000146302,600000000000148963,600000000000151424,600000000000153772,600000000000156222,600000000000159005,600000000000161293,600000000000163783,600000000000166624,600000000000168913,600000000000171220,600000000000173349,600000000000175584,600000000000177882,600000000000180605,600000000000183207,600000000000185420,600000000000187949,600000000000190128,600000000000192738,600000000000195452,600000000000197843,600000000000200173,600000000000202838,600000000000205245,600000000000207579,600000000000210566,600000000000212935,600000000000215382,600000000000218095,600000000000220940,600000000000223634,600000000000226196,600000000000228596,600000000000230733,600000000000232988,600000000000235066,600000000000237064,600000000000239736,600000000000242470,600000000000244915,600000000000247102,600000000000250068} \n| 0.94954\n(1 row)\n", "msg_date": "Wed, 17 Jun 2009 17:11:30 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very slow selects on a small table" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> Tom Lane [[email protected]] wrote:\n>> OK, so what's the entry for column ts_id?\n\n> Is this what you requested? Brian\n\nYup. So according to those stats, all ts_id values fall in the range\n600000000000000001 .. 600000000000250068. It's no wonder it's not\nexpecting to find anything between 0 and 100000. I think maybe you\nforgot to re-analyze after loading data ... although this being 8.3,\nI'd have expected autovacuum to update the stats at some point ...\n\nRecommendation: re-ANALYZE, check that the plan changes to something\nwith a higher estimate for the number of rows for this table, and then\nabort and restart those processes. Lord knows how long you'll be\nwaiting for them to finish with their current plans :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Jun 2009 20:17:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow selects on a small table " } ]
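A sketch of the recovery steps being suggested, plus a quick check of when autovacuum last analyzed the table. PID 7397 is used below only because it is the backend traced earlier; the same call would be repeated for each stuck process, and pg_cancel_backend() cancels the running query without terminating the connection:

ANALYZE VERBOSE ts_stats_transetgroup_user_daily;

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'ts_stats_transetgroup_user_daily';

SELECT pg_cancel_backend(7397);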
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> Yup. So according to those stats, all ts_id values fall in the range\n> 600000000000000001 .. 600000000000250068. It's no wonder it's not\n> expecting to find anything between 0 and 100000. I think maybe you\n> forgot to re-analyze after loading data ... although this being 8.3,\n> I'd have expected autovacuum to update the stats at some point ...\nyes, this is a concern. I may have to do the vacuum analyze myself or \nlearn how to make autovacuum run more frequently.\n> \n> Recommendation: re-ANALYZE, check that the plan changes to something\n> with a higher estimate for the number of rows for this table, and then\n> abort and restart those processes. Lord knows how long you'll be\n> waiting for them to finish with their current plans :-(\nthese queries are still running now 27.5 hours later... These queries \nare generated by some java code and in putting it into a test program so \nI could capture the queries, I failed to get the id range correct -- \nsorry for wasting your time with bogus data. Below is the EXPLAIN output \nfrom the 4 correct queries. I can't tell which one is being executed by \nPID 7397, but the query plans, except the last, do look very similar. In \nany event, as I mentioned, all 4 are still running.\n\nThanks,\nBrian\n\ncemdb=# explain select * from ts_stats_transetgroup_user_daily a where \na.ts_id in (select b.ts_id from ts_stats_transetgroup_user_daily \nb,ts_stats_transet_user_interval c, ts_transetgroup_transets_map m where \nb.ts_transet_group_id = m.ts_transet_group_id and \nm.ts_transet_incarnation_id = c.ts_transet_incarnation_id and \nc.ts_user_incarnation_id = b.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2009-6-16 01:00' and \nc.ts_interval_start_time < '2009-6-16 02:00') and a.ts_id > \n600000000000010000 and a.ts_id < 600000000000020000 order by a.ts_id;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=138722.75..138734.37 rows=9299 width=779)\n Sort Key: a.ts_id\n -> Hash IN Join (cost=131710.94..138416.28 rows=9299 width=779)\n Hash Cond: (a.ts_id = b.ts_id)\n -> Index Scan using ts_stats_transetgroup_user_daily_pkey on \nts_stats_transetgroup_user_daily a (cost=0.00..6602.21 rows=9299 width=779)\n Index Cond: ((ts_id > 600000000000010000::bigint) AND \n(ts_id < 600000000000020000::bigint))\n -> Hash (cost=130113.34..130113.34 rows=255616 width=8)\n -> Hash Join (cost=82370.45..130113.34 rows=255616 \nwidth=8)\n Hash Cond: ((m.ts_transet_group_id = \nb.ts_transet_group_id) AND (c.ts_user_incarnation_id = \nb.ts_user_incarnation_id))\n -> Hash Join (cost=3.32..29255.47 rows=229502 \nwidth=16)\n Hash Cond: (c.ts_transet_incarnation_id = \nm.ts_transet_incarnation_id)\n -> Index Scan using \nts_stats_transet_user_interval_starttime on \nts_stats_transet_user_interval c (cost=0.00..27674.33 rows=229502 width=16)\n Index Cond: ((ts_interval_start_time \n >= '2009-06-16 01:00:00-07'::timestamp with time zone) AND \n(ts_interval_start_time < '2009-06-16 02:00:00-07'::timestamp with time \nzone))\n -> Hash (cost=2.58..2.58 rows=117 width=16)\n -> Seq Scan on \nts_transetgroup_transets_map m (cost=0.00..2.58 rows=117 width=16)\n -> Hash (cost=80511.26..80511.26 rows=247451 \nwidth=24)\n -> Seq Scan on \nts_stats_transetgroup_user_daily b (cost=0.00..80511.26 rows=247451 \nwidth=24)\n(17 
rows)\n\ncemdb=# explain select * from ts_stats_transetgroup_user_daily a where \na.ts_id in (select b.ts_id from ts_stats_transetgroup_user_daily \nb,ts_stats_transet_user_interval c, ts_transetgroup_transets_map m where \nb.ts_transet_group_id = m.ts_transet_group_id and \nm.ts_transet_incarnation_id = c.ts_transet_incarnation_id and \nc.ts_user_incarnation_id = b.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2009-6-16 01:00' and \nc.ts_interval_start_time < '2009-6-16 02:00') and a.ts_id > \n600000000000020000 and a.ts_id < 600000000000030000 order by a.ts_id;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=139430.64..139443.43 rows=10237 width=779)\n Sort Key: a.ts_id\n -> Hash IN Join (cost=131710.94..139089.71 rows=10237 width=779)\n Hash Cond: (a.ts_id = b.ts_id)\n -> Index Scan using ts_stats_transetgroup_user_daily_pkey on \nts_stats_transetgroup_user_daily a (cost=0.00..7265.23 rows=10237 \nwidth=779)\n Index Cond: ((ts_id > 600000000000020000::bigint) AND \n(ts_id < 600000000000030000::bigint))\n -> Hash (cost=130113.34..130113.34 rows=255616 width=8)\n -> Hash Join (cost=82370.45..130113.34 rows=255616 \nwidth=8)\n Hash Cond: ((m.ts_transet_group_id = \nb.ts_transet_group_id) AND (c.ts_user_incarnation_id = \nb.ts_user_incarnation_id))\n -> Hash Join (cost=3.32..29255.47 rows=229502 \nwidth=16)\n Hash Cond: (c.ts_transet_incarnation_id = \nm.ts_transet_incarnation_id)\n -> Index Scan using \nts_stats_transet_user_interval_starttime on \nts_stats_transet_user_interval c (cost=0.00..27674.33 rows=229502 width=16)\n Index Cond: ((ts_interval_start_time \n >= '2009-06-16 01:00:00-07'::timestamp with time zone) AND \n(ts_interval_start_time < '2009-06-16 02:00:00-07'::timestamp with time \nzone))\n -> Hash (cost=2.58..2.58 rows=117 width=16)\n -> Seq Scan on \nts_transetgroup_transets_map m (cost=0.00..2.58 rows=117 width=16)\n -> Hash (cost=80511.26..80511.26 rows=247451 \nwidth=24)\n -> Seq Scan on \nts_stats_transetgroup_user_daily b (cost=0.00..80511.26 rows=247451 \nwidth=24)\n(17 rows)\n\ncemdb=# explain select * from ts_stats_transetgroup_user_daily a where \na.ts_id in (select b.ts_id from ts_stats_transetgroup_user_daily \nb,ts_stats_transet_user_interval c, ts_transetgroup_transets_map m where \nb.ts_transet_group_id = m.ts_transet_group_id and \nm.ts_transet_incarnation_id = c.ts_transet_incarnation_id and \nc.ts_user_incarnation_id = b.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2009-6-16 01:00' and \nc.ts_interval_start_time < '2009-6-16 02:00') and a.ts_id > \n600000000000030000 and a.ts_id < 600000000000040000 order by a.ts_id;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=138685.25..138696.81 rows=9247 width=779)\n Sort Key: a.ts_id\n -> Hash IN Join (cost=131710.94..138380.68 rows=9247 width=779)\n Hash Cond: (a.ts_id = b.ts_id)\n -> Index Scan using ts_stats_transetgroup_user_daily_pkey on \nts_stats_transetgroup_user_daily a (cost=0.00..6567.19 rows=9247 width=779)\n Index Cond: ((ts_id > 600000000000030000::bigint) AND \n(ts_id < 600000000000040000::bigint))\n -> Hash (cost=130113.34..130113.34 rows=255616 width=8)\n -> Hash Join (cost=82370.45..130113.34 
rows=255616 \nwidth=8)\n Hash Cond: ((m.ts_transet_group_id = \nb.ts_transet_group_id) AND (c.ts_user_incarnation_id = \nb.ts_user_incarnation_id))\n -> Hash Join (cost=3.32..29255.47 rows=229502 \nwidth=16)\n Hash Cond: (c.ts_transet_incarnation_id = \nm.ts_transet_incarnation_id)\n -> Index Scan using \nts_stats_transet_user_interval_starttime on \nts_stats_transet_user_interval c (cost=0.00..27674.33 rows=229502 width=16)\n Index Cond: ((ts_interval_start_time \n >= '2009-06-16 01:00:00-07'::timestamp with time zone) AND \n(ts_interval_start_time < '2009-06-16 02:00:00-07'::timestamp with time \nzone))\n -> Hash (cost=2.58..2.58 rows=117 width=16)\n -> Seq Scan on \nts_transetgroup_transets_map m (cost=0.00..2.58 rows=117 width=16)\n -> Hash (cost=80511.26..80511.26 rows=247451 \nwidth=24)\n -> Seq Scan on \nts_stats_transetgroup_user_daily b (cost=0.00..80511.26 rows=247451 \nwidth=24)\n(17 rows)\n\ncemdb=# explain select * from ts_stats_transetgroup_user_daily a where \na.ts_id in (select b.ts_id from ts_stats_transetgroup_user_daily \nb,ts_stats_transet_user_interval c, ts_transetgroup_transets_map m where \nb.ts_transet_group_id = m.ts_transet_group_id and \nm.ts_transet_incarnation_id = c.ts_transet_incarnation_id and \nc.ts_user_incarnation_id = b.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2009-6-16 01:00' and \nc.ts_interval_start_time < '2009-6-16 02:00') and a.ts_id > \n600000000000040000 and a.ts_id < 9223372036854775807 order by a.ts_id;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge IN Join (cost=141592.81..290873.68 rows=209136 width=779)\n Merge Cond: (a.ts_id = b.ts_id)\n -> Index Scan using ts_stats_transetgroup_user_daily_pkey on \nts_stats_transetgroup_user_daily a (cost=0.00..147334.73 rows=209136 \nwidth=779)\n Index Cond: ((ts_id > 600000000000040000::bigint) AND (ts_id < \n9223372036854775807::bigint))\n -> Sort (cost=141592.81..141912.33 rows=255616 width=8)\n Sort Key: b.ts_id\n -> Hash Join (cost=82370.45..130113.34 rows=255616 width=8)\n Hash Cond: ((m.ts_transet_group_id = \nb.ts_transet_group_id) AND (c.ts_user_incarnation_id = \nb.ts_user_incarnation_id))\n -> Hash Join (cost=3.32..29255.47 rows=229502 width=16)\n Hash Cond: (c.ts_transet_incarnation_id = \nm.ts_transet_incarnation_id)\n -> Index Scan using \nts_stats_transet_user_interval_starttime on \nts_stats_transet_user_interval c (cost=0.00..27674.33 rows=229502 width=16)\n Index Cond: ((ts_interval_start_time >= \n'2009-06-16 01:00:00-07'::timestamp with time zone) AND \n(ts_interval_start_time < '2009-06-16 02:00:00-07'::timestamp with time \nzone))\n -> Hash (cost=2.58..2.58 rows=117 width=16)\n -> Seq Scan on ts_transetgroup_transets_map \nm (cost=0.00..2.58 rows=117 width=16)\n -> Hash (cost=80511.26..80511.26 rows=247451 width=24)\n -> Seq Scan on ts_stats_transetgroup_user_daily b \n (cost=0.00..80511.26 rows=247451 width=24)\n(16 rows)\n\n\n", "msg_date": "Thu, 18 Jun 2009 10:06:35 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very slow selects on a small table" }, { "msg_contents": "On Thu, Jun 18, 2009 at 6:06 PM, Brian Cox<[email protected]> wrote:\n\n> these queries are still running now 27.5 hours later... 
These queries are\n> generated by some java code and in putting it into a test program so I could\n> capture the queries, I failed to get the id range correct -- sorry for\n> wasting your time with bogus data. Below is the EXPLAIN output from the 4\n> correct queries. I can't tell which one is being executed by PID 7397, but\n> the query plans, except the last, do look very similar. In any event, as I\n> mentioned, all 4 are still running.\nthis might be quite bogus question, just a hit - but what is your\nwork_mem set to ?\nGuys, isn't postgresql giving hudge cost, when it can't sort in memory ?\n\n-- \nGJ\n", "msg_date": "Thu, 18 Jun 2009 18:12:32 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow selects on a small table" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> these queries are still running now 27.5 hours later... These queries \n> are generated by some java code and in putting it into a test program so \n> I could capture the queries, I failed to get the id range correct -- \n> sorry for wasting your time with bogus data. Below is the EXPLAIN output \n> from the 4 correct queries. I can't tell which one is being executed by \n> PID 7397, but the query plans, except the last, do look very similar. In \n> any event, as I mentioned, all 4 are still running.\n\nStrange as can be. Can you do an EXPLAIN ANALYZE on just the IN's\nsub-select and confirm that the runtime of that is reasonable? I'd\nbe interested to know how many rows it really returns, too.\n\nOne thing that strikes me is that the cost estimates seem a bit on the\nlow side for the rowcounts. Are you using nondefault planner cost\nparameters, and if so what?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jun 2009 13:48:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow selects on a small table " } ]
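Roughly what is being asked for here, as a sketch: the IN sub-select run on its own under EXPLAIN ANALYZE (table and column names copied from the earlier messages), and a listing of any planner cost settings that have been changed away from their defaults:

EXPLAIN ANALYZE
SELECT b.ts_id
FROM ts_stats_transetgroup_user_daily b,
     ts_stats_transet_user_interval c,
     ts_transetgroup_transets_map m
WHERE b.ts_transet_group_id = m.ts_transet_group_id
  AND m.ts_transet_incarnation_id = c.ts_transet_incarnation_id
  AND c.ts_user_incarnation_id = b.ts_user_incarnation_id
  AND c.ts_interval_start_time >= '2009-6-16 01:00'
  AND c.ts_interval_start_time < '2009-6-16 02:00';

SELECT name, setting, source
FROM pg_settings
WHERE name IN ('seq_page_cost', 'random_page_cost', 'cpu_tuple_cost',
               'cpu_index_tuple_cost', 'cpu_operator_cost',
               'effective_cache_size')
  AND source <> 'default';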
[ { "msg_contents": "Grzegorz Jakiewicz [[email protected]] wrote:\n> this might be quite bogus question, just a hit - but what is your\n> work_mem set to ?\n> Guys, isn't postgresql giving hudge cost, when it can't sort in memory ?\nwork_mem = 64MB\n", "msg_date": "Thu, 18 Jun 2009 10:16:51 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very slow selects on a small table" }, { "msg_contents": "On Thu, Jun 18, 2009 at 6:16 PM, Brian Cox <[email protected]> wrote:\n\n> Grzegorz Jakiewicz [[email protected]] wrote:\n>\n>> this might be quite bogus question, just a hit - but what is your\n>> work_mem set to ?\n>> Guys, isn't postgresql giving hudge cost, when it can't sort in memory ?\n>>\n> work_mem = 64MB\n>\ntry increasing it please, to say 256MB\n\n8.4 in explain analyze actually informs you whether sorting was done on disc\nor in memory, but you probably don't run stuff on cutting edge ;)\n\n\n\n-- \nGJ\n\nOn Thu, Jun 18, 2009 at 6:16 PM, Brian Cox <[email protected]> wrote:\nGrzegorz Jakiewicz [[email protected]] wrote:\n\nthis might be quite bogus question, just a hit - but what is your\nwork_mem set to ?\nGuys, isn't postgresql giving hudge cost, when it can't sort in memory ?\n\nwork_mem = 64MB\ntry increasing it please, to say 256MB8.4 in explain analyze actually informs you whether sorting was done on disc or in memory, but you probably don't run stuff on cutting edge ;)\n-- GJ", "msg_date": "Thu, 18 Jun 2009 18:21:03 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow selects on a small table" } ]
[ { "msg_contents": "Hi All,\n\nWe are having a reasonably powerful machine for supporting about 20\ndatabases but in total they're not more then 4GB in size.\n\nThe machine is 2 processor 8 core and 8 Gig or ram so I would expect that PG\nshould cache the whole db into memory. Well actually it doesn't.\n\nWhat is more strange that a query that under zero load is running under\n100ms during high load times it can take up to 15 seconds !!\nWhat on earth can make such difference ?\n\nhere are the key config options that I set up :\n# - Memory -\n\nshared_buffers = 170000 # min 16 or\nmax_connections*2, 8KB each\ntemp_buffers = 21000 # min 100, 8KB each\n#max_prepared_transactions = 5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 1048576 # min 64, size in KB\nmaintenance_work_mem = 1048576 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 524298 # min max_fsm_relations*16, 6 bytes\neach\nmax_fsm_relations = 32768 # min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\nmax_files_per_process = 4000 # min 25\n#preload_libraries = ''\n\nany ideas ?\n\ncheers,\nPeter\n\nHi All,We are having a reasonably powerful machine for supporting about 20 databases but in total they're not more then 4GB in size. The machine is 2 processor 8 core and 8 Gig or ram so I would expect that PG should cache the whole db into memory. Well actually it doesn't. \nWhat is more strange that a query that under zero load is running under 100ms during high load times it can take up to 15 seconds !!What on earth can make such difference ?here are the key config options that I set up :\n# - Memory -shared_buffers = 170000                         # min 16 or max_connections*2, 8KB eachtemp_buffers = 21000                    # min 100, 8KB each#max_prepared_transactions = 5          # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared memory# per transaction slot, plus lock space (see max_locks_per_transaction).work_mem = 1048576                      # min 64, size in KB\nmaintenance_work_mem = 1048576          # min 1024, size in KB#max_stack_depth = 2048                 # min 100, size in KB# - Free Space Map -max_fsm_pages = 524298                  # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 32768               # min 100, ~70 bytes each# - Kernel Resource Usage -max_files_per_process = 4000            # min 25#preload_libraries = ''any ideas ? cheers,\nPeter", "msg_date": "Thu, 18 Jun 2009 20:27:02 +0200", "msg_from": "Peter Alban <[email protected]>", "msg_from_op": true, "msg_subject": "Strange performance response for high load times" }, { "msg_contents": "On Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:\n> Hi All,\n> \n> We are having a reasonably powerful machine for supporting about 20\n> databases but in total they're not more then 4GB in size.\n> \n> The machine is 2 processor 8 core and 8 Gig or ram so I would expect that PG\n> should cache the whole db into memory. 
Well actually it doesn't.\n> \n> What is more strange that a query that under zero load is running under\n> 100ms during high load times it can take up to 15 seconds !!\n> What on earth can make such difference ?\n> \n> here are the key config options that I set up :\n> # - Memory -\n> \n> shared_buffers = 170000 # min 16 or\n> max_connections*2, 8KB each\n> temp_buffers = 21000 # min 100, 8KB each\n> #max_prepared_transactions = 5 # can be 0 or more\n> # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> work_mem = 1048576 # min 64, size in KB\n> maintenance_work_mem = 1048576 # min 1024, size in KB\n\n1GB of work_mem is very high if you have more than a couple of\nqueries that use it.\n\nKen\n\n> #max_stack_depth = 2048 # min 100, size in KB\n> \n> # - Free Space Map -\n> \n> max_fsm_pages = 524298 # min max_fsm_relations*16, 6 bytes\n> each\n> max_fsm_relations = 32768 # min 100, ~70 bytes each\n> \n> # - Kernel Resource Usage -\n> \n> max_files_per_process = 4000 # min 25\n> #preload_libraries = ''\n> \n> any ideas ?\n> \n> cheers,\n> Peter\n", "msg_date": "Thu, 18 Jun 2009 13:30:52 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance response for high load times" }, { "msg_contents": "So Ken ,\n\nWhat do you reckon it should be ? What is the rule of thumb here ?\n\ncheers,\nPeter\n\nOn Thu, Jun 18, 2009 at 8:30 PM, Kenneth Marshall <[email protected]> wrote:\n\n> On Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:\n> > Hi All,\n> >\n> > We are having a reasonably powerful machine for supporting about 20\n> > databases but in total they're not more then 4GB in size.\n> >\n> > The machine is 2 processor 8 core and 8 Gig or ram so I would expect that\n> PG\n> > should cache the whole db into memory. Well actually it doesn't.\n> >\n> > What is more strange that a query that under zero load is running under\n> > 100ms during high load times it can take up to 15 seconds !!\n> > What on earth can make such difference ?\n> >\n> > here are the key config options that I set up :\n> > # - Memory -\n> >\n> > shared_buffers = 170000 # min 16 or\n> > max_connections*2, 8KB each\n> > temp_buffers = 21000 # min 100, 8KB each\n> > #max_prepared_transactions = 5 # can be 0 or more\n> > # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> > memory\n> > # per transaction slot, plus lock space (see max_locks_per_transaction).\n> > work_mem = 1048576 # min 64, size in KB\n> > maintenance_work_mem = 1048576 # min 1024, size in KB\n>\n> 1GB of work_mem is very high if you have more than a couple of\n> queries that use it.\n>\n> Ken\n>\n> > #max_stack_depth = 2048 # min 100, size in KB\n> >\n> > # - Free Space Map -\n> >\n> > max_fsm_pages = 524298 # min max_fsm_relations*16, 6\n> bytes\n> > each\n> > max_fsm_relations = 32768 # min 100, ~70 bytes each\n> >\n> > # - Kernel Resource Usage -\n> >\n> > max_files_per_process = 4000 # min 25\n> > #preload_libraries = ''\n> >\n> > any ideas ?\n> >\n> > cheers,\n> > Peter\n>\n\nSo Ken , What do you reckon it should be ? 
What is the rule of thumb here ?cheers,PeterOn Thu, Jun 18, 2009 at 8:30 PM, Kenneth Marshall <[email protected]> wrote:\nOn Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:\n> Hi All,\n>\n> We are having a reasonably powerful machine for supporting about 20\n> databases but in total they're not more then 4GB in size.\n>\n> The machine is 2 processor 8 core and 8 Gig or ram so I would expect that PG\n> should cache the whole db into memory. Well actually it doesn't.\n>\n> What is more strange that a query that under zero load is running under\n> 100ms during high load times it can take up to 15 seconds !!\n> What on earth can make such difference ?\n>\n> here are the key config options that I set up :\n> # - Memory -\n>\n> shared_buffers = 170000                         # min 16 or\n> max_connections*2, 8KB each\n> temp_buffers = 21000                    # min 100, 8KB each\n> #max_prepared_transactions = 5          # can be 0 or more\n> # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> work_mem = 1048576                      # min 64, size in KB\n> maintenance_work_mem = 1048576          # min 1024, size in KB\n\n1GB of work_mem is very high if you have more than a couple of\nqueries that use it.\n\nKen\n\n> #max_stack_depth = 2048                 # min 100, size in KB\n>\n> # - Free Space Map -\n>\n> max_fsm_pages = 524298                  # min max_fsm_relations*16, 6 bytes\n> each\n> max_fsm_relations = 32768               # min 100, ~70 bytes each\n>\n> # - Kernel Resource Usage -\n>\n> max_files_per_process = 4000            # min 25\n> #preload_libraries = ''\n>\n> any ideas ?\n>\n> cheers,\n> Peter", "msg_date": "Thu, 18 Jun 2009 21:42:47 +0200", "msg_from": "Peter Alban <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange performance response for high load times" }, { "msg_contents": "On Thu, Jun 18, 2009 at 09:42:47PM +0200, Peter Alban wrote:\n> So Ken ,\n> \n> What do you reckon it should be ? What is the rule of thumb here ?\n> \n> cheers,\n> Peter\n> \n\nIt really depends on your query mix. The key to remember is that\nmultiples (possibly many) of the work_mem value can be allocated\nin an individual query. You can set it on a per query basis to \nhelp manage it use, i.e. up it for only the query that needs it.\nWith our systems, which run smaller number of queries we do use\n256MB. I hope that this helps.\n\nRegards,\nKen\n> On Thu, Jun 18, 2009 at 8:30 PM, Kenneth Marshall <[email protected]> wrote:\n> \n> > On Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:\n> > > Hi All,\n> > >\n> > > We are having a reasonably powerful machine for supporting about 20\n> > > databases but in total they're not more then 4GB in size.\n> > >\n> > > The machine is 2 processor 8 core and 8 Gig or ram so I would expect that\n> > PG\n> > > should cache the whole db into memory. 
Well actually it doesn't.\n> > >\n> > > What is more strange that a query that under zero load is running under\n> > > 100ms during high load times it can take up to 15 seconds !!\n> > > What on earth can make such difference ?\n> > >\n> > > here are the key config options that I set up :\n> > > # - Memory -\n> > >\n> > > shared_buffers = 170000 # min 16 or\n> > > max_connections*2, 8KB each\n> > > temp_buffers = 21000 # min 100, 8KB each\n> > > #max_prepared_transactions = 5 # can be 0 or more\n> > > # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> > > memory\n> > > # per transaction slot, plus lock space (see max_locks_per_transaction).\n> > > work_mem = 1048576 # min 64, size in KB\n> > > maintenance_work_mem = 1048576 # min 1024, size in KB\n> >\n> > 1GB of work_mem is very high if you have more than a couple of\n> > queries that use it.\n> >\n> > Ken\n> >\n> > > #max_stack_depth = 2048 # min 100, size in KB\n> > >\n> > > # - Free Space Map -\n> > >\n> > > max_fsm_pages = 524298 # min max_fsm_relations*16, 6\n> > bytes\n> > > each\n> > > max_fsm_relations = 32768 # min 100, ~70 bytes each\n> > >\n> > > # - Kernel Resource Usage -\n> > >\n> > > max_files_per_process = 4000 # min 25\n> > > #preload_libraries = ''\n> > >\n> > > any ideas ?\n> > >\n> > > cheers,\n> > > Peter\n> >\n", "msg_date": "Thu, 18 Jun 2009 15:01:02 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance response for high load times" }, { "msg_contents": "What's still badgering me , is the performance when, there is no load or\nsignificantly lower than peek times ?\n\nWhy is there such a big difference ?\n\ni.e. off peek times a simple select with where (on indexed column) and limit\ntaks* 40 ms* during peek times it took *2 seconds* - 50 times slower !\n\ncheers,\nPeter\n\n\nOn Thu, Jun 18, 2009 at 10:01 PM, Kenneth Marshall <[email protected]> wrote:\n\n> On Thu, Jun 18, 2009 at 09:42:47PM +0200, Peter Alban wrote:\n> > So Ken ,\n> >\n> > What do you reckon it should be ? What is the rule of thumb here ?\n> >\n> > cheers,\n> > Peter\n> >\n>\n> It really depends on your query mix. The key to remember is that\n> multiples (possibly many) of the work_mem value can be allocated\n> in an individual query. You can set it on a per query basis to\n> help manage it use, i.e. up it for only the query that needs it.\n> With our systems, which run smaller number of queries we do use\n> 256MB. I hope that this helps.\n>\n> Regards,\n> Ken\n> > On Thu, Jun 18, 2009 at 8:30 PM, Kenneth Marshall <[email protected]> wrote:\n> >\n> > > On Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:\n> > > > Hi All,\n> > > >\n> > > > We are having a reasonably powerful machine for supporting about 20\n> > > > databases but in total they're not more then 4GB in size.\n> > > >\n> > > > The machine is 2 processor 8 core and 8 Gig or ram so I would expect\n> that\n> > > PG\n> > > > should cache the whole db into memory. 
Well actually it doesn't.\n> > > >\n> > > > What is more strange that a query that under zero load is running\n> under\n> > > > 100ms during high load times it can take up to 15 seconds !!\n> > > > What on earth can make such difference ?\n> > > >\n> > > > here are the key config options that I set up :\n> > > > # - Memory -\n> > > >\n> > > > shared_buffers = 170000 # min 16 or\n> > > > max_connections*2, 8KB each\n> > > > temp_buffers = 21000 # min 100, 8KB each\n> > > > #max_prepared_transactions = 5 # can be 0 or more\n> > > > # note: increasing max_prepared_transactions costs ~600 bytes of\n> shared\n> > > > memory\n> > > > # per transaction slot, plus lock space (see\n> max_locks_per_transaction).\n> > > > work_mem = 1048576 # min 64, size in KB\n> > > > maintenance_work_mem = 1048576 # min 1024, size in KB\n> > >\n> > > 1GB of work_mem is very high if you have more than a couple of\n> > > queries that use it.\n> > >\n> > > Ken\n> > >\n> > > > #max_stack_depth = 2048 # min 100, size in KB\n> > > >\n> > > > # - Free Space Map -\n> > > >\n> > > > max_fsm_pages = 524298 # min max_fsm_relations*16, 6\n> > > bytes\n> > > > each\n> > > > max_fsm_relations = 32768 # min 100, ~70 bytes each\n> > > >\n> > > > # - Kernel Resource Usage -\n> > > >\n> > > > max_files_per_process = 4000 # min 25\n> > > > #preload_libraries = ''\n> > > >\n> > > > any ideas ?\n> > > >\n> > > > cheers,\n> > > > Peter\n> > >\n>\n\nWhat's  still badgering me , is the performance when, there is no load or significantly lower than peek times ? Why is there such a big difference ? i.e. off peek times a simple select with where (on indexed column) and limit taks 40 ms during peek times it took 2 seconds  - 50 times slower ! \ncheers,PeterOn Thu, Jun 18, 2009 at 10:01 PM, Kenneth Marshall <[email protected]> wrote:\nOn Thu, Jun 18, 2009 at 09:42:47PM +0200, Peter Alban wrote:\n> So Ken ,\n>\n> What do you reckon it should be ? What is the rule of thumb here ?\n>\n> cheers,\n> Peter\n>\n\nIt really depends on your query mix. The key to remember is that\nmultiples (possibly many) of the work_mem value can be allocated\nin an individual query. You can set it on a per query basis to\nhelp manage it use, i.e. up it for only the query that needs it.\nWith our systems, which run smaller number of queries we do use\n256MB. I hope that this helps.\n\nRegards,\nKen\n> On Thu, Jun 18, 2009 at 8:30 PM, Kenneth Marshall <[email protected]> wrote:\n>\n> > On Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:\n> > > Hi All,\n> > >\n> > > We are having a reasonably powerful machine for supporting about 20\n> > > databases but in total they're not more then 4GB in size.\n> > >\n> > > The machine is 2 processor 8 core and 8 Gig or ram so I would expect that\n> > PG\n> > > should cache the whole db into memory. 
Well actually it doesn't.\n> > >\n> > > What is more strange that a query that under zero load is running under\n> > > 100ms during high load times it can take up to 15 seconds !!\n> > > What on earth can make such difference ?\n> > >\n> > > here are the key config options that I set up :\n> > > # - Memory -\n> > >\n> > > shared_buffers = 170000                         # min 16 or\n> > > max_connections*2, 8KB each\n> > > temp_buffers = 21000                    # min 100, 8KB each\n> > > #max_prepared_transactions = 5          # can be 0 or more\n> > > # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> > > memory\n> > > # per transaction slot, plus lock space (see max_locks_per_transaction).\n> > > work_mem = 1048576                      # min 64, size in KB\n> > > maintenance_work_mem = 1048576          # min 1024, size in KB\n> >\n> > 1GB of work_mem is very high if you have more than a couple of\n> > queries that use it.\n> >\n> > Ken\n> >\n> > > #max_stack_depth = 2048                 # min 100, size in KB\n> > >\n> > > # - Free Space Map -\n> > >\n> > > max_fsm_pages = 524298                  # min max_fsm_relations*16, 6\n> > bytes\n> > > each\n> > > max_fsm_relations = 32768               # min 100, ~70 bytes each\n> > >\n> > > # - Kernel Resource Usage -\n> > >\n> > > max_files_per_process = 4000            # min 25\n> > > #preload_libraries = ''\n> > >\n> > > any ideas ?\n> > >\n> > > cheers,\n> > > Peter\n> >", "msg_date": "Thu, 18 Jun 2009 22:49:55 +0200", "msg_from": "Peter Alban <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange performance response for high load times" }, { "msg_contents": "Peter Alban <[email protected]> wrote: \n \n> Why is there such a big difference ?\n> \n> i.e. off peek times a simple select with where (on indexed column)\n> and limit taks* 40 ms* during peek times it took *2 seconds* - 50\n> times slower !\n \nIf your high work_mem setting you may have been causing the OS to\ndiscard cached data, causing disk reads where you normally get cache\nhits, or even triggered swapping. Either of those can easily cause a\ndifference of that magnitude, or more.\n \n-Kevin\n", "msg_date": "Thu, 18 Jun 2009 16:01:17 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance response for high load times" } ]
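A sketch of the per-query scoping Ken describes, which also addresses Kevin's point: the server-wide setting stays modest, and only the statement that genuinely needs a big sort gets the large value. The 256MB figure is just the example value from his mail.

-- postgresql.conf keeps a conservative default (a few MB per sort/hash node);
-- the one heavy report raises it for itself only:
BEGIN;
SET LOCAL work_mem = '256MB';   -- reverts automatically at COMMIT or ROLLBACK
-- ... run the query that needs the large sort here ...
COMMIT;

A plain SET work_mem = '256MB' (without LOCAL) does the same for the rest of the session, which is often enough on a dedicated reporting connection.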
[ { "msg_contents": "ts_stats_transet_user_interval has ~48M rows. ts_id is the PK and there \nis an index on ts_interval_start_time. I reindexed it and ran vacuum \nanalyze. Only SELECTs have been done since these operations.\n\ncemdb=# explain select min(ts_id) from ts_stats_transet_user_interval a \nwhere 0=0 and a.ts_interval_start_time >= '2009-6-16 01:00' and \na.ts_interval_start_time < '2009-6-16 02:00';\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=12.19..12.20 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..12.19 rows=1 width=8)\n -> Index Scan using ts_stats_transet_user_interval_pkey on \nts_stats_transet_user_interval a (cost=0.00..5496152.30 rows=450799 \nwidth=8)\n Filter: ((ts_id IS NOT NULL) AND \n(ts_interval_start_time >= '2009-06-16 01:00:00-07'::timestamp with time \nzone) AND (ts_interval_start_time < '2009-06-16 02:00:00-07'::timestamp \nwith time zone))\n(5 rows)\ncemdb=# explain select max(ts_id) from ts_stats_transet_user_interval a \nwhere 0=0 and a.ts_interval_start_time >= '2009-6-16 01:00' and \na.ts_interval_start_time < '2009-6-16 02:00';\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=12.19..12.20 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..12.19 rows=1 width=8)\n -> Index Scan Backward using \nts_stats_transet_user_interval_pkey on ts_stats_transet_user_interval a \n (cost=0.00..5496152.30 rows=450799 width=8)\n Filter: ((ts_id IS NOT NULL) AND \n(ts_interval_start_time >= '2009-06-16 01:00:00-07'::timestamp with time \nzone) AND (ts_interval_start_time < '2009-06-16 02:00:00-07'::timestamp \nwith time zone))\n(5 rows)\n[root@rdl64xeoserv01 log]# time PGPASSWORD=quality psql -U admin -d \ncemdb -c \"select min(ts_id) from ts_stats_transet_user_interval a where \na.ts_interval_start_time >= '2009-6-16 01:00' and \na.ts_interval_start_time < '2009-6-16 02:00'\" min\n--------------------\n 600000000032100000\n(1 row)\n\n\nreal 1m32.025s\nuser 0m0.000s\nsys 0m0.003s\n[root@rdl64xeoserv01 log]# time PGPASSWORD=quality psql -U admin -d \ncemdb -c \"select max(ts_id) from ts_stats_transet_user_interval a where \na.ts_interval_start_time >= '2009-6-16 01:00' and \na.ts_interval_start_time < '2009-6-16 02:00'\"\n max\n--------------------\n 600000000032399999\n(1 row)\n\n\nreal 16m39.412s\nuser 0m0.002s\nsys 0m0.002s\n\n\nseems like max() shouldn't take any longer than min() and certainly not \n10 times as long. Any ideas on how to determine the max more quickly?\n\nThanks,\nBrian\n", "msg_date": "Thu, 18 Jun 2009 16:34:39 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "select max() much slower than select min()" }, { "msg_contents": "Brian Cox <[email protected]> wrote: \n \n> cemdb=# explain select min(ts_id) from\n> ts_stats_transet_user_interval a \n> where 0=0 and a.ts_interval_start_time >= '2009-6-16 01:00' and \n> a.ts_interval_start_time < '2009-6-16 02:00';\n \n> seems like max() shouldn't take any longer than min() and certainly\n> not 10 times as long. Any ideas on how to determine the max more\n> quickly?\n \nIs there any correlation between ts_id and ts_interval_start_time? 
\nPerhaps if you tried min and max with different time ranges it would\nfind a row on a backward scan faster. It'll take ten times as long if\nit has to scan through ten times as many rows to find a match.\n \nI don't suppose you have an index on ts_interval_start_time?\nIf not, what happens if you run these queries after adding one?\n \n-Kevin\n", "msg_date": "Thu, 18 Jun 2009 19:15:43 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max() much slower than select min()" }, { "msg_contents": "> -----Original Message-----\n> From: Brian Cox\n> Subject: [PERFORM] select max() much slower than select min()\n> \n\n> seems like max() shouldn't take any longer than min() and \n> certainly not 10 times as long. Any ideas on how to determine \n> the max more quickly?\n\n\nThat is odd. It seems like max should actually have to scan fewer rows than\nmin should. It might still be bloat in the table, because unless you did\nVACUUM FULL there could still be dead rows. A vacuum verbose would show if\nthere is bloat or not. Also maybe you could try a two column index like\nthis:\n\ncreate index test_index on ts_stats_transet_user_interval\n(ts_interval_start_time, ts_id);\n\n\nDave\n\n\n", "msg_date": "Fri, 19 Jun 2009 09:14:37 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max() much slower than select min()" } ]
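These releases keep no cross-column statistics, so there is no single number answering Kevin's correlation question, but a rough proxy is how well each column tracks the table's physical row order:

SELECT attname, correlation
  FROM pg_stats
 WHERE tablename = 'ts_stats_transet_user_interval'
   AND attname IN ('ts_id', 'ts_interval_start_time');
-- Values near 1.0 for both columns mean both increase roughly in step with
-- the physical layout, i.e. they largely advance together.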
[ { "msg_contents": "Kevin Grittner [[email protected]] wrote:\n> Is there any correlation between ts_id and ts_interval_start_time?\nonly vaguely: increasing ts_interval_start_time implies increasing ts_id \nbut there may be many rows (100,000's) with the same ts_interval_start_time\n\n> Perhaps if you tried min and max with different time ranges it would\n> find a row on a backward scan faster. It'll take ten times as long if\n> it has to scan through ten times as many rows to find a match.\nit looks like there are fewer rows backwards than forwards:\n\ncemdb=> select count(*) from ts_stats_transet_user_interval where \nts_interval_start_time < '2009-6-16 01:00';\n count\n----------\n 32100000\n(1 row)\n\ncemdb=> select count(*) from ts_stats_transet_user_interval where \nts_interval_start_time >= '2009-6-16 02:00';\n count\n----------\n 13500000\n(1 row)\n\n\n> I don't suppose you have an index on ts_interval_start_time?\nthere is an index. I mentioned this in my orginal posting.\n\nThanks,\nBrian\n\n\n", "msg_date": "Thu, 18 Jun 2009 17:40:22 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select max() much slower than select min()" }, { "msg_contents": "Brian Cox <[email protected]> wrote: \n> Kevin Grittner [[email protected]] wrote:\n>> Is there any correlation between ts_id and ts_interval_start_time?\n> only vaguely: increasing ts_interval_start_time implies increasing\n> ts_id but there may be many rows (100,000's) with the same\n> ts_interval_start_time\n> \n>> Perhaps if you tried min and max with different time ranges it\n>> would find a row on a backward scan faster. It'll take ten times\n>> as long if it has to scan through ten times as many rows to find a\n>> match.\n> it looks like there are fewer rows backwards than forwards:\n \nHmmm.... I was going to suggest possible bloat near the end of the\ntable, but the vacuum and reindex should have kept that at from being\na problem.\n \nThis might be an issue where disks are laid out so that the pages can\nbe read from start to end quickly; reading backwards might cause a lot\nmore rotational delay.\n \n>> I don't suppose you have an index on ts_interval_start_time?\n> there is an index. I mentioned this in my orginal posting.\n \nSorry I missed that. I was afraid that it might not use it because\nPostgreSQL doesn't yet recognize correlations between columns. If it\ndid, it might determine that the other index was better for this query\n(which may or may not be the case).\n \nCould you provide the output of VACUUM ANALYZE for these queries, so\nwe can compare expected to actual? Also, what is your statistics\ntarget for these (default_statistics_target if you haven't overridden\nthe specific columns involved)?\n \nI guess you could try something like this, too:\n \nselect max(ts_id) from (select ts_id from\nts_stats_transet_user_interval a \nwhere 0=0 and a.ts_interval_start_time >= '2009-6-16 01:00' and \na.ts_interval_start_time < '2009-6-16 02:00') x;\n \n(Untested, so you might need to fix some typo or oversight.)\n \nThe EXPLAIN ANALYZE of that might yield interesting information.\n \nIf that doesn't get you closer to something acceptable, you could\nconsider a functional index on the inverse of the ts_id column, and\nsearch for the negative of the min of that. 
Kinda ugly, but it might\nwork because the disk would be spinning in the right direction for\nyou.\n \n-Kevin\n", "msg_date": "Thu, 18 Jun 2009 20:10:53 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max() much slower than select min()" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> Kevin Grittner [[email protected]] wrote:\n>> Is there any correlation between ts_id and ts_interval_start_time?\n\n> only vaguely: increasing ts_interval_start_time implies increasing ts_id \n> but there may be many rows (100,000's) with the same ts_interval_start_time\n\nThat's the problem then. Notice what the query plan is doing: it's\nscanning the table in order by ts_id, looking for the first row that\nfalls within the ts_interval_start_time range. Evidently this\nparticular range is associated with smaller ts_ids, so you reach it a\nlot sooner in a ts_id ascending scan than a ts_id descending one.\n\nGiven the estimated size of the range, scanning with the\nts_interval_start_time index wouldn't be much fun either, since it would\nhave to examine all rows in the range to determine the min or max ts_id.\nYou could possibly twiddle the cost constants to make the planner choose\nthat plan instead, but it's still not going to be exactly speedy.\n\nSome experimentation suggests that it might help to provide a 2-column\nindex on (ts_id, ts_interval_start_time). This is still going to be\nscanned in order by ts_id, but it will be possible to check the\nts_interval_start_time condition in the index, eliminating a large\nnumber of useless trips to the heap. Whether this type of query is\nimportant enough to justify maintaining an extra index for is something\nyou'll have to decide for yourself...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jun 2009 10:26:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max() much slower than select min() " }, { "msg_contents": "On Fri, Jun 19, 2009 at 3:26 PM, Tom Lane<[email protected]> wrote:\n>\n> That's the problem then.  Notice what the query plan is doing: it's\n> scanning the table in order by ts_id, looking for the first row that\n> falls within the ts_interval_start_time range.  Evidently this\n> particular range is associated with smaller ts_ids, so you reach it a\n> lot sooner in a ts_id ascending scan than a ts_id descending one.\n>\n> Given the estimated size of the range, scanning with the\n> ts_interval_start_time index wouldn't be much fun either, since it would\n> have to examine all rows in the range to determine the min or max ts_id.\n> You could possibly twiddle the cost constants to make the planner choose\n> that plan instead, but it's still not going to be exactly speedy.\n\nIf your range of ts_interval_start_time is relatively static -- it\ndoesn't look like it in this case given that's only an hour, but... 
--\nthen one option is to create a partial index on \"ts_id\" with the\ncondition \"WHERE ts_interval_start_time >= 'foo' AND\nts_interval_start_time < 'bar' \".\n\nBut if your range of times is always going to vary then you're going\nto have a problem there.\n\nThere ought to be a way to use GIST to do this but I don't think we\nhave any way to combine two different columns of different types in a\nsingle GIST index except as a multicolumn index which doesn't do what\nyou want.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Fri, 19 Jun 2009 16:20:07 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max() much slower than select min()" } ]
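For concreteness, the two index shapes suggested in this exchange look roughly like this; the index names are made up here, and the literal window in the partial index only makes sense when the range really is fixed in advance.

-- Tom's two-column variant: still scanned in ts_id order, but the time
-- condition can be checked inside the index, skipping most heap visits.
CREATE INDEX ts_tui_id_time
    ON ts_stats_transet_user_interval (ts_id, ts_interval_start_time);

-- Greg's partial-index variant, one index per fixed window:
CREATE INDEX ts_tui_id_hour01
    ON ts_stats_transet_user_interval (ts_id)
 WHERE ts_interval_start_time >= '2009-06-16 01:00'
   AND ts_interval_start_time <  '2009-06-16 02:00';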
[ { "msg_contents": "Hi,\n\nLooking at the XLogInsert() from 8.3 and 8.4, the 8.4\nversion includes a call to RecoveryInProgress() at\nthe top as well as a call to TRACE_POSTGRESQL_XLOG_INSERT().\nCould either of those have caused a context switch or\ncache flush resulting in worse performance.\n\nCheers,\nKen\n", "msg_date": "Fri, 19 Jun 2009 08:33:49 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.4 COPY performance regression on Solaris" }, { "msg_contents": "Kenneth Marshall <[email protected]> writes:\n> Looking at the XLogInsert() from 8.3 and 8.4, the 8.4\n> version includes a call to RecoveryInProgress() at\n> the top as well as a call to TRACE_POSTGRESQL_XLOG_INSERT().\n> Could either of those have caused a context switch or\n> cache flush resulting in worse performance.\n\nHmm. TRACE_POSTGRESQL_XLOG_INSERT() should be a no-op (or at least,\nnone of the complainants have admitted to building with --enable-dtrace).\nRecoveryInProgress() should be just a quick test of a local boolean,\nso it's hard to believe that it costs anything noticeable; but if anyone\nwho is able to reproduce the problem wants to test this theory, try\ntaking out these lines\n\n\t/* cross-check on whether we should be here or not */\n\tif (RecoveryInProgress())\n\t\telog(FATAL, \"cannot make new WAL entries during recovery\");\n\nwhich are only a sanity check anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jun 2009 10:38:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 COPY performance regression on Solaris " } ]
[ { "msg_contents": "Hey folks,\n\nI'm new to all this stuff, and am sitting here with kSar looking at\nsome graphed results of some load tests we did, trying to figure\nthings out :-)\n\nWe got some unsatisfactory results in stressing our system, and now I\nhave to divine where the bottleneck is.\n\nWe did 4 tests, upping the load each time. The 3rd and 4th ones have\nall 8 cores pegged at about 95%. Yikes!\n\nIn the first test the processor running queue spikes at 7 and maybe\naverages 4 or 5\n\nIn the last test it spikes at 33 with an average maybe 25.\n\nLooks to me like it could be a CPU bottleneck. But I'm new at this :-)\n\nIs there a general rule of thumb \"if queue is longer than X, it is\nlikely a bottleneck?\"\n\nIn reading an IBM Redbook on Linux performance, I also see this :\n\"High numbers of context switches in connection with a large number of\ninterrupts can signal driver or application issues.\"\n\nOn my first test where the CPU is not pegged, context switching goes\nfrom about 3700 to about 4900, maybe averaging 4100\n\nOn the pegged test, the values are maybe 10% higher than that, maybe 15%.\n\nIt is an IBM 3550 with 8 cores, 2660.134 MHz (from dmesg), 32Gigs RAM\n\nthanks,\n-Alan\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Fri, 19 Jun 2009 11:59:59 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "processor running queue - general rule of thumb?" }, { "msg_contents": "Alan McKay wrote:\n> Hey folks,\n> We did 4 tests, upping the load each time. The 3rd and 4th ones have\n> all 8 cores pegged at about 95%. Yikes!\n>\n> In the first test the processor running queue spikes at 7 and maybe\n> averages 4 or 5\n>\n> In the last test it spikes at 33 with an average maybe 25.\n>\n> Looks to me like it could be a CPU bottleneck. But I'm new at this :-)\n>\n> Is there a general rule of thumb \"if queue is longer than X, it is\n> likely a bottleneck?\"\n>\n> In reading an IBM Redbook on Linux performance, I also see this :\n> \"High numbers of context switches in connection with a large number of\n> interrupts can signal driver or application issues.\"\n>\n> On my first test where the CPU is not pegged, context switching goes\n> from about 3700 to about 4900, maybe averaging 4100\n>\n>\n> \n\nWell the people here will need allot more information to figure out what \nis going on. \n\nWhat kind of Stress did you do???? is it a specific query causing the \nproblem in the test\nWhat kind of load?\nHow many simulated clients\nHow big is the database?\n\nNeed to see the postgresql.config\n\nWhat kind of IO Subsystem do you have ???\nwhat does vmstat show\n\nhave you look at wiki yet\nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\n\n", "msg_date": "Fri, 19 Jun 2009 15:11:51 -0400", "msg_from": "justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processor running queue - general rule of thumb?" }, { "msg_contents": "On Fri, Jun 19, 2009 at 9:59 AM, Alan McKay<[email protected]> wrote:\n> Hey folks,\n>\n> I'm new to all this stuff, and am sitting here with kSar looking at\n> some graphed results of some load tests we did, trying to figure\n> things out :-)\n>\n> We got some unsatisfactory results in stressing our system, and now I\n> have to divine where the bottleneck is.\n>\n> We did 4 tests, upping the load each time.   The 3rd and 4th ones have\n> all 8 cores pegged at about 95%.  
Yikes!\n>\n> In the first test the processor running queue spikes at 7 and maybe\n> averages 4 or 5\n>\n> In the last test it spikes at 33 with an average maybe 25.\n>\n> Looks to me like it could be a CPU bottleneck.  But I'm new at this :-)\n>\n> Is there a general rule of thumb \"if queue is longer than X, it is\n> likely a bottleneck?\"\n>\n> In reading an IBM Redbook on Linux performance, I also see this :\n> \"High numbers of context switches in connection with a large number of\n> interrupts can signal driver or application issues.\"\n>\n> On my first test where the CPU is not pegged, context switching goes\n> from about 3700 to about 4900, maybe averaging 4100\n\nThat's not too bad. If you see them in the 30k to 150k range, then\nworry about it.\n\n> On the pegged test, the values are maybe 10% higher than that, maybe 15%.\n\nThat's especially good news. Normally when you've got a problem, it\nwill increase in a geometric (or worse) way.\n\n> It is an IBM 3550 with 8 cores, 2660.134 MHz (from dmesg), 32Gigs RAM\n\nLike the other poster said, we likely don't have enough to tell you\nwhat's going on, but from what you've said here it sounds like you're\nmostly just CPU bound. Assuming you're reading the output of vmstat\nand top and other tools like that.\n", "msg_date": "Fri, 19 Jun 2009 13:34:28 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processor running queue - general rule of thumb?" }, { "msg_contents": "> Like the other poster said, we likely don't have enough to tell you\n> what's going on, but from what you've said here it sounds like you're\n> mostly just CPU bound.  Assuming you're reading the output of vmstat\n> and top and other tools like that.\n\nThanks. I used 'sadc' from the sysstat RPM (part of the sar suite) to\ncollect data, and it does collect Vm and other data like that from top\nand vmstat.\n\nI did not see any irregular activity in those areas.\n\nI realise I did not give you all enough details, which is why I worded\nmy question they way I did : \"is there a general rule of thumb for\nrunning queue\"\n\n\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Fri, 19 Jun 2009 15:48:48 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: processor running queue - general rule of thumb?" }, { "msg_contents": "BTW, our designer got the nytprofile or whatever it is called for Perl\nand found out that it was a problem with the POE library that was\nbeing used as a state-machine to drive the whole load suite. It was\ntaking something like 95% of the CPU time!\n\nOn Fri, Jun 19, 2009 at 11:59 AM, Alan McKay<[email protected]> wrote:\n> Hey folks,\n>\n> I'm new to all this stuff, and am sitting here with kSar looking at\n> some graphed results of some load tests we did, trying to figure\n> things out :-)\n>\n> We got some unsatisfactory results in stressing our system, and now I\n> have to divine where the bottleneck is.\n>\n> We did 4 tests, upping the load each time.   The 3rd and 4th ones have\n> all 8 cores pegged at about 95%.  Yikes!\n>\n> In the first test the processor running queue spikes at 7 and maybe\n> averages 4 or 5\n>\n> In the last test it spikes at 33 with an average maybe 25.\n>\n> Looks to me like it could be a CPU bottleneck.  
But I'm new at this :-)\n>\n> Is there a general rule of thumb \"if queue is longer than X, it is\n> likely a bottleneck?\"\n>\n> In reading an IBM Redbook on Linux performance, I also see this :\n> \"High numbers of context switches in connection with a large number of\n> interrupts can signal driver or application issues.\"\n>\n> On my first test where the CPU is not pegged, context switching goes\n> from about 3700 to about 4900, maybe averaging 4100\n>\n> On the pegged test, the values are maybe 10% higher than that, maybe 15%.\n>\n> It is an IBM 3550 with 8 cores, 2660.134 MHz (from dmesg), 32Gigs RAM\n>\n> thanks,\n> -Alan\n>\n> --\n> “Don't eat anything you've ever seen advertised on TV”\n>         - Michael Pollan, author of \"In Defense of Food\"\n>\n\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Tue, 23 Jun 2009 16:41:53 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: processor running queue - general rule of thumb?" } ]
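When the CPUs are pegged, it is also worth looking at what the backends themselves are doing at that moment; with the catalog columns of this era (they were renamed in later releases) that is roughly:

SELECT procpid, waiting, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>'
 ORDER BY query_start;
-- Lots of long-running rows point at the queries themselves; a mostly empty
-- list while the cores stay pegged points back at the client side, which is
-- what the POE finding above turned out to be.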
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> Some experimentation suggests that it might help to provide a 2-column\n> index on (ts_id, ts_interval_start_time). This is still going to be\n> scanned in order by ts_id, but it will be possible to check the\n> ts_interval_start_time condition in the index, eliminating a large\n> number of useless trips to the heap. Whether this type of query is\n> important enough to justify maintaining an extra index for is something\n> you'll have to decide for yourself...\n\nThanks to all for the analysis and suggestions. Since the number of rows \nin an hour < ~500,000, brute force looks to be a fast solution:\n\nselect ts_id from ... where ts_interval_start_time >= ... and ...\n\nThis query runs very fast as does a single pass through the ids to find \nthe min and max.\n\nBrian\n", "msg_date": "Fri, 19 Jun 2009 13:05:46 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select max() much slower than select min()" }, { "msg_contents": "On Fri, Jun 19, 2009 at 1:05 PM, Brian Cox<[email protected]> wrote:\n> Thanks to all for the analysis and suggestions. Since the number of rows in\n> an hour < ~500,000, brute force looks to be a fast solution:\n>\n> select ts_id from ... where ts_interval_start_time >= ... and ...\n>\n> This query runs very fast as does a single pass through the ids to find the\n> min and max.\n\nAlong those lines, couldn't you just have the DB do the work?\n\nselect max(ts_id), min(ts_id) from ... where ts_interval_start_time >=\n... and ...\n\nThen you don't have to transfer 500k ids across the network...\n\n-Dave\n", "msg_date": "Fri, 19 Jun 2009 13:41:39 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max() much slower than select min()" } ]
[ { "msg_contents": "David Rees [[email protected]] wrote:\n> Along those lines, couldn't you just have the DB do the work?\n> \n> select max(ts_id), min(ts_id) from ... where ts_interval_start_time >=\n> ... and ...\n> \n> Then you don't have to transfer 500k ids across the network...\nI guess you didn't read the entire thread: I started it because the \nquery you suggest took 15 mins to complete.\n\nBrian\n\n", "msg_date": "Fri, 19 Jun 2009 14:05:36 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select max() much slower than select min()" }, { "msg_contents": "On Fri, Jun 19, 2009 at 2:05 PM, Brian Cox<[email protected]> wrote:\n> David Rees [[email protected]] wrote:\n>>\n>> Along those lines, couldn't you just have the DB do the work?\n>>\n>> select max(ts_id), min(ts_id) from ... where ts_interval_start_time >=\n>> ... and ...\n>>\n>> Then you don't have to transfer 500k ids across the network...\n>\n> I guess you didn't read the entire thread: I started it because the query\n> you suggest took 15 mins to complete.\n\nI read the whole thing and just scanned through it again - I didn't\nsee any queries where you put both the min and max into the same\nquery, but perhaps I missed it. Then again - I don't quite see why\nyour brute force method is any faster than using a min or max, either.\n It would be interesting to see the analyze output as apparently\nscanning on the ts_interval_start_time is a lot faster than scanning\nthe pkey (even though Tom thought that it would not be much difference\nsince either way you have to hit the heap a lot).\n\nMy thought was that putting both the min and max into the query would\nencourage Pg to use the same index as the brute force method.\nIf not, you could still put the ts_ids into a temporary table using\nyour brute force query and use that to avoid the overhead transferring\n500k ids over the network.\n\n-Dave\n", "msg_date": "Fri, 19 Jun 2009 16:39:23 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max() much slower than select min()" } ]
[ { "msg_contents": "Hey folks !\n\n\nStill kind of analyzing the situation , I realized that I do have a\nreasonably high shared_memory and effective_cache_size , though if the same\nquery is being run in a number of times ~100-200 concurrent connection it is\nnot being cached .\n\nShould PG realize that if the table data is same should the query result set\nalso be the same ? Instead each query takes up to 1-2 seconds .\n\nWhere do I see what the PG does ? I can see now the query's that take long\ntime ,but do not have information about what the optimizer does neither when\nthe DB decides about to table scan or cache ?\n\ncheers,\nPeter\n\nHey folks ! Still kind of analyzing the situation , I realized that I do have a reasonably high shared_memory and effective_cache_size , though if the same query is being run in a number of times ~100-200 concurrent connection it is not being cached . \nShould PG realize that if the table data is same should the query result set also be the same ? Instead each query takes up to 1-2 seconds . Where do I see what the PG does ? I can see now the query's that take long time ,but do not have information about what the optimizer does neither when the DB decides about to table scan or cache ?\ncheers,Peter", "msg_date": "Sun, 21 Jun 2009 12:54:40 +0200", "msg_from": "Peter Alban <[email protected]>", "msg_from_op": true, "msg_subject": "same query in high number of times" }, { "msg_contents": "On Sun, Jun 21, 2009 at 6:54 AM, Peter Alban<[email protected]> wrote:\n> Should PG realize that if the table data is same should the query result set\n> also be the same ?\n\nNo. That's not so easy to implement as you might think. Saving the\nresults of each previous query in case someone issues the same query\nagain without having changed anything in the meantime would probably\ncost more in performance on average that you'd get out of it.\n\n> Where do I see what the PG does ? 
I can see now the query's that take long\n> time ,but do not have information about what the optimizer does neither when\n> the DB decides about to table scan or cache ?\n\nCan't you get this from EXPLAIN and EXPLAIN ANALYZE?\n\n...Robert\n", "msg_date": "Sun, 21 Jun 2009 13:42:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "Hi,\n\nHere is the query :\n*duration: 2533.734 ms statement: *\n\n*SELECT news.url_text,news.title, comments.name, comments.createdate,\ncomments.user_id, comments.comment FROM news, comments WHERE comments.cid=\nnews.id AND comments.published='1' GROUP BY news.url_text,news.title\ncomments.name, comments.createdate, comments.user_id, comments.comment ORDER\nBY comments.createdate DESC LIMIT 3\n*\n\nAnd here is the query plan :\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=4313.54..4313.55 rows=3 width=595) (actual\ntime=288.525..288.528 rows=3 loops=1)\n -> Sort (cost=4313.54..4347.26 rows=13486 width=595) (actual\ntime=288.523..288.523 rows=3 loops=1)\n Sort Key: comments.createdate\n -> HashAggregate (cost=3253.60..3388.46 rows=13486 width=595)\n(actual time=137.521..148.132 rows=13415 loops=1)\n -> Hash Join (cost=1400.73..3051.31 rows=13486 width=595)\n(actual time=14.298..51.049 rows=13578 loops=1)\n Hash Cond: (\"outer\".cid = \"inner\".id)\n -> Seq Scan on comments (cost=0.00..1178.72\nrows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n Filter: (published = 1)\n -> Hash (cost=1391.18..1391.18 rows=3818 width=81)\n(actual time=14.268..14.268 rows=3818 loops=1)\n -> Seq Scan on news (cost=0.00..1391.18\nrows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\n\nThe same is being requested from different sessions . 
So why is it not being\ncached .\n*\npostgresq.conf --current --\nshared_buffers = 410000 # min 16 or\nmax_connections*2, 8KB each\ntemp_buffers = 11000 # min 100, 8KB each\n#max_prepared_transactions = 5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 51024 # min 64, size in KB\n#maintenance_work_mem = 16384 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 692674 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page\nfetch\n # cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000*\n\ncheers,\nPeter\nOn Sun, Jun 21, 2009 at 7:42 PM, Robert Haas <[email protected]> wrote:\n\n> On Sun, Jun 21, 2009 at 6:54 AM, Peter Alban<[email protected]>\n> wrote:\n> > Should PG realize that if the table data is same should the query result\n> set\n> > also be the same ?\n>\n> No. That's not so easy to implement as you might think. Saving the\n> results of each previous query in case someone issues the same query\n> again without having changed anything in the meantime would probably\n> cost more in performance on average that you'd get out of it.\n>\n> > Where do I see what the PG does ? 
I can see now the query's that take\n> long\n> > time ,but do not have information about what the optimizer does neither\n> when\n> > the DB decides about to table scan or cache ?\n>\n> Can't you get this from EXPLAIN and EXPLAIN ANALYZE?\n>\n> ...Robert\n>\n\nHi,Here is the query  : duration: 2533.734 ms  statement: SELECT news.url_text,news.title, comments.name, comments.createdate, comments.user_id, comments.comment FROM news, comments WHERE comments.cid=news.id  AND comments.published='1' GROUP BY news.url_text,news.title comments.name, comments.createdate, comments.user_id, comments.comment ORDER BY comments.createdate DESC LIMIT 3\nAnd here is the query plan :                                                               QUERY PLAN                                                               ----------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual time=288.525..288.528 rows=3 loops=1)   ->  Sort  (cost=4313.54..4347.26 rows=13486 width=595) (actual time=288.523..288.523 rows=3 loops=1)         Sort Key: comments.createdate\n         ->  HashAggregate  (cost=3253.60..3388.46 rows=13486 width=595) (actual time=137.521..148.132 rows=13415 loops=1)               ->  Hash Join  (cost=1400.73..3051.31 rows=13486 width=595) (actual time=14.298..51.049 rows=13578 loops=1)\n                     Hash Cond: (\"outer\".cid = \"inner\".id)                     ->  Seq Scan on comments  (cost=0.00..1178.72 rows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n                           Filter: (published = 1)                     ->  Hash  (cost=1391.18..1391.18 rows=3818 width=81) (actual time=14.268..14.268 rows=3818 loops=1)                           ->  Seq Scan on news  (cost=0.00..1391.18 rows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\nThe same is being requested from different sessions . 
So why is it not being cached .postgresq.conf --current --shared_buffers = 410000                         # min 16 or max_connections*2, 8KB each\ntemp_buffers = 11000                    # min 100, 8KB each#max_prepared_transactions = 5          # can be 0 or more# note: increasing max_prepared_transactions costs ~600 bytes of shared memory# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 51024                        # min 64, size in KB#maintenance_work_mem = 16384           # min 1024, size in KB#max_stack_depth = 2048                 # min 100, size in KB#---------------------------------------------------------------------------\n# QUERY TUNING#---------------------------------------------------------------------------# - Planner Method Configuration -#enable_bitmapscan = on#enable_hashagg = on#enable_hashjoin = on#enable_indexscan = on\n#enable_mergejoin = on#enable_nestloop = on#enable_seqscan = on#enable_sort = on#enable_tidscan = on# - Planner Cost Constants -effective_cache_size = 692674           # typically 8KB each\n#random_page_cost = 4                   # units are one sequential page fetch                                        # cost#cpu_tuple_cost = 0.01                  # (same)#cpu_index_tuple_cost = 0.001           # (same)\n#cpu_operator_cost = 0.0025             # (same)# - Genetic Query Optimizer -#geqo = on#geqo_threshold = 12#geqo_effort = 5                        # range 1-10#geqo_pool_size = 0                     # selects default based on effort\n#geqo_generations = 0                   # selects default based on effort#geqo_selection_bias = 2.0              # range 1.5-2.0# - Other Planner Options -#default_statistics_target = 10         # range 1-1000\ncheers,PeterOn Sun, Jun 21, 2009 at 7:42 PM, Robert Haas <[email protected]> wrote:\nOn Sun, Jun 21, 2009 at 6:54 AM, Peter Alban<[email protected]> wrote:\n> Should PG realize that if the table data is same should the query result set\n> also be the same ?\n\nNo.  That's not so easy to implement as you might think.  Saving the\nresults of each previous query in case someone issues the same query\nagain without having changed anything in the meantime would probably\ncost more in performance on average that you'd get out of it.\n\n> Where do I see what the PG does ? I can see now the query's that take long\n> time ,but do not have information about what the optimizer does neither when\n> the DB decides about to table scan or cache ?\n\nCan't you get this from EXPLAIN and EXPLAIN ANALYZE?\n\n...Robert", "msg_date": "Sun, 21 Jun 2009 20:28:15 +0200", "msg_from": "Peter Alban <[email protected]>", "msg_from_op": true, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "On Sun, Jun 21, 2009 at 12:28 PM, Peter Alban<[email protected]> wrote:\n> Hi,\n>\n> Here is the query  :\n> duration: 2533.734 ms  statement:\n\nSNIP\n\n>  Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual\n> time=288.525..288.528 rows=3 loops=1)\n\nAccording to this query plan, your query is taking up 288\nmilliseconds. 
I'm guessing the rest of the time is actually is spent\ntransferring data.\n", "msg_date": "Sun, 21 Jun 2009 16:06:41 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "On Mon, Jun 22, 2009 at 12:06 AM, Scott Marlowe<[email protected]> wrote:\n> On Sun, Jun 21, 2009 at 12:28 PM, Peter Alban<[email protected]> wrote:\n>> Hi,\n>>\n>> Here is the query  :\n>> duration: 2533.734 ms  statement:\n>\n> SNIP\n>\n>>  Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual\n>> time=288.525..288.528 rows=3 loops=1)\n>\n> According to this query plan, your query is taking up 288\n> milliseconds.  I'm guessing the rest of the time is actually is spent\n> transferring data.\n\nHuuuuuu ...\nThe cost is _certainly_ not the time in ms.\nSee the planner cost constants in a config file, or in any good documentation.\n\n-- \nF4FQM\nKerunix Flan\nLaurent Laborde\n", "msg_date": "Tue, 23 Jun 2009 10:52:08 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "On Tue, Jun 23, 2009 at 10:52 AM, Laurent Laborde<[email protected]> wrote:\n> On Mon, Jun 22, 2009 at 12:06 AM, Scott Marlowe<[email protected]> wrote:\n>> On Sun, Jun 21, 2009 at 12:28 PM, Peter Alban<[email protected]> wrote:\n>>> Hi,\n>>>\n>>> Here is the query  :\n>>> duration: 2533.734 ms  statement:\n>>\n>> SNIP\n>>\n>>>  Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual\n>>> time=288.525..288.528 rows=3 loops=1)\n>>\n>> According to this query plan, your query is taking up 288\n>> milliseconds.  I'm guessing the rest of the time is actually is spent\n>> transferring data.\n>\n> Huuuuuu ...\n> The cost is _certainly_ not the time in ms.\n> See the planner cost constants in a config file, or in any good documentation.\n\nWoooops... cost... time... my mistake ... :)\n*duck and cover*\n\n-- \nF4FQM\nKerunix Flan\nLaurent Laborde\n", "msg_date": "Tue, 23 Jun 2009 10:55:49 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: same query in high number of times" } ]
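To spell out Laurent's correction: the cost figures in a plan are abstract planner units (multiples of one sequential page fetch), while only the "actual time" numbers from EXPLAIN ANALYZE are milliseconds.

-- The planner cost constants define the scale of those units:
SHOW random_page_cost;   -- 4, as in the config quoted above: one random page fetch,
                         -- expressed in units of a sequential fetch
SHOW cpu_tuple_cost;     -- 0.01: processing one row, in the same units
-- So "cost=4313.54" is roughly 4300 sequential-page-fetch equivalents,
-- while "actual time=288.525..288.528" is the measured 288 ms.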
[ { "msg_contents": "With out knowing how much memory for each of those settings and how much work_mem for each connection its kinda hard to tell what is going. \nAlso need version for PG, OS, how big the tables are, Also would be nice to see the query itself with explain and analyze \n\nPG does not cache the results from a query but the tables itself. \n\nThe table could be completely cached but there may be some nasty Nested loops causing the problem.\n\nWhat are you expecting the query time to be??\n\ncheck out http://wiki.postgresql.org/wiki/Performance_Optimization there is allot of info on how to tune, and diagnose problem queries \n\n---- Message from mailto:[email protected] Peter Alban [email protected] at 06-21-2009 12:54:40 PM ------\n\nHey folks ! \n\n\nStill kind of analyzing the situation , I realized that I do have a reasonably high shared_memory and effective_cache_size , though if the same query is being run in a number of times ~100-200 concurrent connection it is not being cached . \n\nShould PG realize that if the table data is same should the query result set also be the same ? Instead each query takes up to 1-2 seconds . \n\nWhere do I see what the PG does ? I can see now the query's that take long time ,but do not have information about what the optimizer does neither when the DB decides about to table scan or cache ?\n\ncheers,\nPeter\n\n\n\n\n\nWith out knowing how much memory for each of those settings and how much work_mem  for each connection its kinda hard to tell what is going.  Also need version for PG, OS,  how big the tables are,  Also would be nice to see the query itself with explain and analyze PG does not cache the results from a query but the tables itself.  The table could be completely cached but there may be some nasty Nested loops causing the problem.What are you expecting the query time to be??check out http://wiki.postgresql.org/wiki/Performance_Optimization  there is allot of info on how to tune, and diagnose problem queries   ---- Message from Peter Alban <[email protected]> at 06-21-2009 12:54:40 PM ------Hey folks ! Still kind of analyzing the situation , I realized that I do have a reasonably high shared_memory and effective_cache_size , though if the same query is being run in a number of times ~100-200 concurrent connection it is not being cached . \nShould PG realize that if the table data is same should the query result set also be the same ? Instead each query takes up to 1-2 seconds . Where do I see what the PG does ? I can see now the query's that take long time ,but do not have information about what the optimizer does neither when the DB decides about to table scan or cache ?\ncheers,Peter", "msg_date": "Sun, 21 Jun 2009 10:01:05 -0400", "msg_from": "\"Justin Graf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: same query in high number of times" } ]
[ { "msg_contents": "Peter Alban wrote: \n\nduration: 2533.734 ms statement: \n \n SELECT news.url_text,news.title, http://comments.name comments.name, comments.createdate, comments.user_id, comments.comment FROM news, comments WHERE comments.cid=http://news.id news.id AND comments.published='1' GROUP BY news.url_text,news.title http://comments.name comments.name, comments.createdate, comments.user_id, comments.comment ORDER BY comments.createdate DESC LIMIT 3\n \n \nAnd here is the query plan : \n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=4313.54..4313.55 rows=3 width=595) (actual time=288.525..288.528 rows=3 loops=1)\n - Sort (cost=4313.54..4347.26 rows=13486 width=595) (actual time=288.523..288.523 rows=3 loops=1)\n Sort Key: comments.createdate\n - HashAggregate (cost=3253.60..3388.46 rows=13486 width=595) (actual time=137.521..148.132 rows=13415 loops=1)\n - Hash Join (cost=1400.73..3051.31 rows=13486 width=595) (actual time=14.298..51.049 rows=13578 loops=1)\n Hash Cond: (\"outer\".cid = \"inner\".id)\n - Seq Scan on comments (cost=0.00..1178.72 rows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n Filter: (published = 1)\n - Hash (cost=1391.18..1391.18 rows=3818 width=81) (actual time=14.268..14.268 rows=3818 loops=1)\n - Seq Scan on news (cost=0.00..1391.18 rows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\n \nThe same is being requested from different sessions . So why is it not being cached .\n\n\nBecause the query results are not cached only the RAW tables are. The query is rerun every time it is requested. \n\nWhat is the group by clause accomplishing??? \nThe sorting and hash Aggregate is eating up all the time\n\n\n\nwork_mem = 51024 # min 64, size in KB\n \nThats allot memory dedicated to work mem if you have 30 connections open this could eat up 1.5gigs pushing the data out of cache. 
\n\n\n\n\n\n\n\n\n\n\nPeter Alban wrote:\nduration: 2533.734 ms  statement: \n\nSELECT news.url_text,news.title, comments.name, comments.createdate,\ncomments.user_id, comments.comment FROM news, comments WHERE\ncomments.cid=news.id \nAND comments.published='1' GROUP BY news.url_text,news.title comments.name,\ncomments.createdate, comments.user_id, comments.comment ORDER BY\ncomments.createdate DESC LIMIT 3\n\n\nAnd here is the query plan : \n                                                              QUERY\nPLAN                                                               \n----------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual\ntime=288.525..288.528 rows=3 loops=1)\n   ->  Sort  (cost=4313.54..4347.26 rows=13486 width=595) (actual\ntime=288.523..288.523 rows=3 loops=1)\n         Sort Key: comments.createdate\n         ->  HashAggregate  (cost=3253.60..3388.46 rows=13486\nwidth=595) (actual time=137.521..148.132 rows=13415 loops=1)\n               ->  Hash Join  (cost=1400.73..3051.31 rows=13486\nwidth=595) (actual time=14.298..51.049 rows=13578 loops=1)\n                     Hash Cond: (\"outer\".cid = \"inner\".id)\n                     ->  Seq Scan on comments  (cost=0.00..1178.72\nrows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n                           Filter: (published = 1)\n                     ->  Hash  (cost=1391.18..1391.18 rows=3818\nwidth=81) (actual time=14.268..14.268 rows=3818 loops=1)\n                           ->  Seq Scan on news  (cost=0.00..1391.18\nrows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\n\nThe same is being requested from different sessions . So why is it not\nbeing cached .\n\n\n\nBecause the query results are not cached only the RAW tables are.   The\nquery is rerun every time it is requested. \n\nWhat is the group by clause accomplishing???  \nThe sorting and hash Aggregate is eating up all the time\n\nwork_mem = 51024                        # min 64, size\nin KB\n\n\nThats allot memory dedicated to work mem if you have 30 connections\nopen this could eat up 1.5gigs pushing the data out of cache.", "msg_date": "Sun, 21 Jun 2009 16:01:33 -0400", "msg_from": "\"Justin Graf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "On Sun, Jun 21, 2009 at 9:01 PM, Justin Graf<[email protected]> wrote:\n> work_mem = 51024                        # min 64, size in KB\n>\n> Thats allot memory dedicated to work mem if you have 30 connections open\n> this could eat up 1.5gigs pushing the data out of cache.\n\nI thought work memory is max memory that can be allocated per\nconnection for sorting, etc. 
I think it is not allocated when\nconnection is opened, but only on 'if needed' basis.\n\n\n-- \nGJ\n", "msg_date": "Sun, 21 Jun 2009 21:36:01 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "---- Message from mailto:[email protected] Grzegorz Jaśkiewicz [email protected] at 06-21-2009 09:36:01 PM ------\n\nOn Sun, Jun 21, 2009 at 9:01 PM, Justin [email protected] wrote:\n work_mem = 51024 # min 64, size in KB\n\n Thats allot memory dedicated to work mem if you have 30 connections open\n this could eat up 1.5gigs pushing the data out of cache.\n\nI thought work memory is max memory that can be allocated per\nconnection for sorting, etc. I think it is not allocated when\nconnection is opened, but only on 'if needed' basis.\n\n\n\n---- Message from Grzegorz Jaśkiewicz <[email protected]> at 06-21-2009 09:36:01 PM ------On Sun, Jun 21, 2009 at 9:01 PM, Justin Graf<[email protected]> wrote:\n> work_mem = 51024                        # min 64, size in KB\n>\n> Thats allot memory dedicated to work mem if you have 30 connections open\n> this could eat up 1.5gigs pushing the data out of cache.\n\nI thought work memory is max memory that can be allocated per\nconnection for sorting, etc. I think it is not allocated when\nconnection is opened, but only on 'if needed' basis.", "msg_date": "Sun, 21 Jun 2009 16:51:24 -0400", "msg_from": "\"Justin Graf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "On Sun, Jun 21, 2009 at 10:01 PM, Justin Graf <[email protected]>wrote:\n\n> Peter Alban wrote:\n>\n> *duration: 2533.734 ms statement: *\n>\n> *SELECT news.url_text,news.title, comments.name, comments.createdate,\n> comments.user_id, comments.comment FROM news, comments WHERE comments.cid=\n> news.id AND comments.published='1' GROUP BY news.url_text,news.title\n> comments.name, comments.createdate, comments.user_id, comments.comment\n> ORDER BY comments.createdate DESC LIMIT 3\n> *\n>\n> And here is the query plan :\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=4313.54..4313.55 rows=3 width=595) (actual\n> time=288.525..288.528 rows=3 loops=1)\n> -> Sort (cost=4313.54..4347.26 rows=13486 width=595) (actual\n> time=288.523..288.523 rows=3 loops=1)\n> Sort Key: comments.createdate\n> -> HashAggregate (cost=3253.60..3388.46 rows=13486 width=595)\n> (actual time=137.521..148.132 rows=13415 loops=1)\n> -> Hash Join (cost=1400.73..3051.31 rows=13486 width=595)\n> (actual time=14.298..51.049 rows=13578 loops=1)\n> Hash Cond: (\"outer\".cid = \"inner\".id)\n> -> Seq Scan on comments (cost=0.00..1178.72\n> rows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n> Filter: (published = 1)\n> -> Hash (cost=1391.18..1391.18 rows=3818 width=81)\n> (actual time=14.268..14.268 rows=3818 loops=1)\n> -> Seq Scan on news (cost=0.00..1391.18\n> rows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\n>\n> The same is being requested from different sessions . So why is it not\n> being cached .\n>\n>\n>\n> Because the query results are not cached only the RAW tables are. 
The\n> query is rerun every time it is requested.\n>\n> What is the group by clause accomplishing???\n> The sorting and hash Aggregate is eating up all the time\n>\n\n*So this should mean that having say a 5 mb table in memory doing such query\nabove takes 2 secs in memory ? *\n\nAssuming that, we probably have really slow memory :)\n\nBesides , the query makes less sense to me , but I dont write the queries\n(yet) simply looking at the server side .\nSo do you suggest to tune the queries or shall I rather look for other\nmonitoring tools ?\ncheers,\nPeter\n\n\n>\n>\n> *work_mem = 51024 # min 64, size in KB\n> *\n>\n>\n> Thats allot memory dedicated to work mem if you have 30 connections open\n> this could eat up 1.5gigs pushing the data out of cache.\n>\n>\n>\n>\n>\n>\n>\n\nOn Sun, Jun 21, 2009 at 10:01 PM, Justin Graf <[email protected]> wrote:\n\nPeter Alban wrote:\nduration: 2533.734 ms  statement: \n\nSELECT news.url_text,news.title, comments.name, comments.createdate,\ncomments.user_id, comments.comment FROM news, comments WHERE\ncomments.cid=news.id \nAND comments.published='1' GROUP BY news.url_text,news.title comments.name,\ncomments.createdate, comments.user_id, comments.comment ORDER BY\ncomments.createdate DESC LIMIT 3\n\n\nAnd here is the query plan : \n                                                              QUERY\nPLAN                                                               \n----------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual\ntime=288.525..288.528 rows=3 loops=1)\n   ->  Sort  (cost=4313.54..4347.26 rows=13486 width=595) (actual\ntime=288.523..288.523 rows=3 loops=1)\n         Sort Key: comments.createdate\n         ->  HashAggregate  (cost=3253.60..3388.46 rows=13486\nwidth=595) (actual time=137.521..148.132 rows=13415 loops=1)\n               ->  Hash Join  (cost=1400.73..3051.31 rows=13486\nwidth=595) (actual time=14.298..51.049 rows=13578 loops=1)\n                     Hash Cond: (\"outer\".cid = \"inner\".id)\n                     ->  Seq Scan on comments  (cost=0.00..1178.72\nrows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n                           Filter: (published = 1)\n                     ->  Hash  (cost=1391.18..1391.18 rows=3818\nwidth=81) (actual time=14.268..14.268 rows=3818 loops=1)\n                           ->  Seq Scan on news  (cost=0.00..1391.18\nrows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\n\nThe same is being requested from different sessions . So why is it not\nbeing cached .\n\n\n\nBecause the query results are not cached only the RAW tables are.   The\nquery is rerun every time it is requested. \n\nWhat is the group by clause accomplishing???  \nThe sorting and hash Aggregate is eating up all the timeSo this should mean that having say a 5 mb table in memory doing such query above takes 2 secs in memory ? \nAssuming that, we probably have really slow memory  :) Besides , the query makes less sense to me , but I dont write the queries (yet) simply looking at the server side  .So do you suggest to tune the queries or shall I rather look for other monitoring tools ? 
\ncheers,Peter \n\n\nwork_mem = 51024                        # min 64, size\nin KB\n\n\nThats allot memory dedicated to work mem if you have 30 connections\nopen this could eat up 1.5gigs pushing the data out of cache.", "msg_date": "Sun, 21 Jun 2009 22:59:49 +0200", "msg_from": "Peter Alban <[email protected]>", "msg_from_op": false, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "On Sun, Jun 21, 2009 at 4:59 PM, Peter Alban<[email protected]> wrote:\n>\n>\n> On Sun, Jun 21, 2009 at 10:01 PM, Justin Graf <[email protected]>\n> wrote:\n>>\n>> Peter Alban wrote:\n>>\n>> duration: 2533.734 ms  statement:\n>>\n>> SELECT news.url_text,news.title, comments.name, comments.createdate,\n>> comments.user_id, comments.comment FROM news, comments WHERE\n>> comments.cid=news.id  AND comments.published='1' GROUP BY\n>> news.url_text,news.title comments.name, comments.createdate,\n>> comments.user_id, comments.comment ORDER BY comments.createdate DESC LIMIT 3\n>>\n>>\n>> And here is the query plan :\n>>                                                               QUERY\n>> PLAN\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------\n>>  Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual\n>> time=288.525..288.528 rows=3 loops=1)\n>>    ->  Sort  (cost=4313.54..4347.26 rows=13486 width=595) (actual\n>> time=288.523..288.523 rows=3 loops=1)\n>>          Sort Key: comments.createdate\n>>          ->  HashAggregate  (cost=3253.60..3388.46 rows=13486 width=595)\n>> (actual time=137.521..148.132 rows=13415 loops=1)\n>>                ->  Hash Join  (cost=1400.73..3051.31 rows=13486 width=595)\n>> (actual time=14.298..51.049 rows=13578 loops=1)\n>>                      Hash Cond: (\"outer\".cid = \"inner\".id)\n>>                      ->  Seq Scan on comments  (cost=0.00..1178.72\n>> rows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n>>                            Filter: (published = 1)\n>>                      ->  Hash  (cost=1391.18..1391.18 rows=3818 width=81)\n>> (actual time=14.268..14.268 rows=3818 loops=1)\n>>                            ->  Seq Scan on news  (cost=0.00..1391.18\n>> rows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\n>>\n>> The same is being requested from different sessions . So why is it not\n>> being cached .\n>>\n>>\n>> Because the query results are not cached only the RAW tables are.   The\n>> query is rerun every time it is requested.\n>>\n>> What is the group by clause accomplishing???\n>> The sorting and hash Aggregate is eating up all the time\n>\n> So this should mean that having say a 5 mb table in memory doing such query\n> above takes 2 secs in memory ?\n\nNope. But as others have pointed out, you need to figure out why it's\ntaking 2.5 s but EXPLAIN ANALYZE is only saying 300 ms.\n\nThere's other things you can do to optimize this query; for example:\n\n1. Try creating an index on comments (createdate), and don't forget to\nANALYZE the table afterward, or\n\n2. Modify the query to remove the probably-unnecessary GROUP BY.\n\nBut figuring out the times may be the first thing. My guess is that\nthe 2.5 s time is a time from your logs, maybe at a time when the\nsystem was busy, and the 300 ms time was what you got it when you ran\nit some other time. But maybe there's some other explanation. 
You\nshould try to figure it out.\n\n...Robert\n", "msg_date": "Sun, 21 Jun 2009 23:23:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "hey folks !\n\neventually the removing of the group by did improve but still my concern is\nwhy cant we take the result from memory given its same resultset .\nBut I keep pusing for the developers to move to memcached so we overcome\nthis limitation .\n\ncheers,\nPeter\n\nOn Mon, Jun 22, 2009 at 5:23 AM, Robert Haas <[email protected]> wrote:\n\n> On Sun, Jun 21, 2009 at 4:59 PM, Peter Alban<[email protected]>\n> wrote:\n> >\n> >\n> > On Sun, Jun 21, 2009 at 10:01 PM, Justin Graf <[email protected]>\n> > wrote:\n> >>\n> >> Peter Alban wrote:\n> >>\n> >> duration: 2533.734 ms statement:\n> >>\n> >> SELECT news.url_text,news.title, comments.name, comments.createdate,\n> >> comments.user_id, comments.comment FROM news, comments WHERE\n> >> comments.cid=news.id AND comments.published='1' GROUP BY\n> >> news.url_text,news.title comments.name, comments.createdate,\n> >> comments.user_id, comments.comment ORDER BY comments.createdate DESC\n> LIMIT 3\n> >>\n> >>\n> >> And here is the query plan :\n> >> QUERY\n> >> PLAN\n> >>\n> >>\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> >> Limit (cost=4313.54..4313.55 rows=3 width=595) (actual\n> >> time=288.525..288.528 rows=3 loops=1)\n> >> -> Sort (cost=4313.54..4347.26 rows=13486 width=595) (actual\n> >> time=288.523..288.523 rows=3 loops=1)\n> >> Sort Key: comments.createdate\n> >> -> HashAggregate (cost=3253.60..3388.46 rows=13486 width=595)\n> >> (actual time=137.521..148.132 rows=13415 loops=1)\n> >> -> Hash Join (cost=1400.73..3051.31 rows=13486\n> width=595)\n> >> (actual time=14.298..51.049 rows=13578 loops=1)\n> >> Hash Cond: (\"outer\".cid = \"inner\".id)\n> >> -> Seq Scan on comments (cost=0.00..1178.72\n> >> rows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n> >> Filter: (published = 1)\n> >> -> Hash (cost=1391.18..1391.18 rows=3818\n> width=81)\n> >> (actual time=14.268..14.268 rows=3818 loops=1)\n> >> -> Seq Scan on news (cost=0.00..1391.18\n> >> rows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\n> >>\n> >> The same is being requested from different sessions . So why is it not\n> >> being cached .\n> >>\n> >>\n> >> Because the query results are not cached only the RAW tables are. The\n> >> query is rerun every time it is requested.\n> >>\n> >> What is the group by clause accomplishing???\n> >> The sorting and hash Aggregate is eating up all the time\n> >\n> > So this should mean that having say a 5 mb table in memory doing such\n> query\n> > above takes 2 secs in memory ?\n>\n> Nope. But as others have pointed out, you need to figure out why it's\n> taking 2.5 s but EXPLAIN ANALYZE is only saying 300 ms.\n>\n> There's other things you can do to optimize this query; for example:\n>\n> 1. Try creating an index on comments (createdate), and don't forget to\n> ANALYZE the table afterward, or\n>\n> 2. Modify the query to remove the probably-unnecessary GROUP BY.\n>\n> But figuring out the times may be the first thing. My guess is that\n> the 2.5 s time is a time from your logs, maybe at a time when the\n> system was busy, and the 300 ms time was what you got it when you ran\n> it some other time. But maybe there's some other explanation. 
You\n> should try to figure it out.\n>\n> ...Robert\n>\n\nhey folks ! eventually the removing of the group by did improve but still my concern is why cant we take the result from memory given its same resultset . But I keep pusing for the developers to move to memcached so we overcome this limitation . \ncheers,PeterOn Mon, Jun 22, 2009 at 5:23 AM, Robert Haas <[email protected]> wrote:\nOn Sun, Jun 21, 2009 at 4:59 PM, Peter Alban<[email protected]> wrote:\n>\n>\n> On Sun, Jun 21, 2009 at 10:01 PM, Justin Graf <[email protected]>\n> wrote:\n>>\n>> Peter Alban wrote:\n>>\n>> duration: 2533.734 ms  statement:\n>>\n>> SELECT news.url_text,news.title, comments.name, comments.createdate,\n>> comments.user_id, comments.comment FROM news, comments WHERE\n>> comments.cid=news.id  AND comments.published='1' GROUP BY\n>> news.url_text,news.title comments.name, comments.createdate,\n>> comments.user_id, comments.comment ORDER BY comments.createdate DESC LIMIT 3\n>>\n>>\n>> And here is the query plan :\n>>                                                               QUERY\n>> PLAN\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------\n>>  Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual\n>> time=288.525..288.528 rows=3 loops=1)\n>>    ->  Sort  (cost=4313.54..4347.26 rows=13486 width=595) (actual\n>> time=288.523..288.523 rows=3 loops=1)\n>>          Sort Key: comments.createdate\n>>          ->  HashAggregate  (cost=3253.60..3388.46 rows=13486 width=595)\n>> (actual time=137.521..148.132 rows=13415 loops=1)\n>>                ->  Hash Join  (cost=1400.73..3051.31 rows=13486 width=595)\n>> (actual time=14.298..51.049 rows=13578 loops=1)\n>>                      Hash Cond: (\"outer\".cid = \"inner\".id)\n>>                      ->  Seq Scan on comments  (cost=0.00..1178.72\n>> rows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n>>                            Filter: (published = 1)\n>>                      ->  Hash  (cost=1391.18..1391.18 rows=3818 width=81)\n>> (actual time=14.268..14.268 rows=3818 loops=1)\n>>                            ->  Seq Scan on news  (cost=0.00..1391.18\n>> rows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\n>>\n>> The same is being requested from different sessions . So why is it not\n>> being cached .\n>>\n>>\n>> Because the query results are not cached only the RAW tables are.   The\n>> query is rerun every time it is requested.\n>>\n>> What is the group by clause accomplishing???\n>> The sorting and hash Aggregate is eating up all the time\n>\n> So this should mean that having say a 5 mb table in memory doing such query\n> above takes 2 secs in memory ?\n\nNope.  But as others have pointed out, you need to figure out why it's\ntaking 2.5 s but EXPLAIN ANALYZE is only saying 300 ms.\n\nThere's other things you can do to optimize this query; for example:\n\n1. Try creating an index on comments (createdate), and don't forget to\nANALYZE the table afterward, or\n\n2. Modify the query to remove the probably-unnecessary GROUP BY.\n\nBut figuring out the times may be the first thing.  My guess is that\nthe 2.5 s time is a time from your logs, maybe at a time when the\nsystem was busy, and the 300 ms time was what you got it when you ran\nit some other time.  But maybe there's some other explanation.  
You\nshould try to figure it out.\n\n...Robert", "msg_date": "Mon, 22 Jun 2009 23:11:06 +0200", "msg_from": "Peter Alban <[email protected]>", "msg_from_op": false, "msg_subject": "Re: same query in high number of times" }, { "msg_contents": "On 6/22/09 2:11 PM, \"Peter Alban\" <[email protected]> wrote:\n\n> hey folks ! \n> \n> eventually the removing of the group by did improve but still my concern is\n> why cant we take the result from memory given its same resultset .\n> But I keep pusing for the developers to move to memcached so we overcome this\n> limitation . \n> \n> cheers,\n> Peter\n> \nCaching of that sort is better suited to client code or a middle tier\ncaching technology. It isn¹t that simple to resolve that the same query\nwill return the same result for most queries. Between two executions\nanother could have made a change.\n\nBut more importantly, the client is what knows what level of Ostaleness¹ is\nappropriate for the data. A RDBMS will operate on a no tolerance policy for\nreturning stale data and with strict transactional visibility rules. If\nyour application only needs a result that is fresh within some window of\ntime, it should do the caching (or another component). This is far more\nefficient. A RDBMS is a very poorly performing data cache < its built to\nquickly resolve arbitrary relational queries with strict transactional\nguarantees, not to cache a set of answers. Although given a hammer\neverything looks like a nail . . .\n\n\n> On Mon, Jun 22, 2009 at 5:23 AM, Robert Haas <[email protected]> wrote:\n>> On Sun, Jun 21, 2009 at 4:59 PM, Peter Alban<[email protected]> wrote:\n>>> \n>>> \n>>> On Sun, Jun 21, 2009 at 10:01 PM, Justin Graf <[email protected]>\n>>> wrote:\n>>>> \n>>>> Peter Alban wrote:\n>>>> \n>>>> duration: 2533.734 ms  statement:\n>>>> \n>>>> SELECT news.url_text,news.title, comments.name <http://comments.name> ,\n>>>> comments.createdate,\n>>>> comments.user_id, comments.comment FROM news, comments WHERE\n>>>> comments.cid=news.id <http://news.id>   AND comments.published='1' GROUP BY\n>>>> news.url_text,news.title comments.name <http://comments.name> ,\n>>>> comments.createdate,\n>>>> comments.user_id, comments.comment ORDER BY comments.createdate DESC LIMIT\n>>>> 3\n>>>> \n>>>> \n>>>> And here is the query plan :\n>>>>                                                               QUERY\n>>>> PLAN\n>>>> \n>>>> ---------------------------------------------------------------------------\n>>>> -------------------------------------------------------------\n>>>>  Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual\n>>>> time=288.525..288.528 rows=3 loops=1)\n>>>>    ->  Sort  (cost=4313.54..4347.26 rows=13486 width=595) (actual\n>>>> time=288.523..288.523 rows=3 loops=1)\n>>>>          Sort Key: comments.createdate\n>>>>          ->  HashAggregate  (cost=3253.60..3388.46 rows=13486 width=595)\n>>>> (actual time=137.521..148.132 rows=13415 loops=1)\n>>>>                ->  Hash Join  (cost=1400.73..3051.31 rows=13486 width=595)\n>>>> (actual time=14.298..51.049 rows=13578 loops=1)\n>>>>                      Hash Cond: (\"outer\".cid = \"inner\".id)\n>>>>                      ->  Seq Scan on comments  (cost=0.00..1178.72\n>>>> rows=13480 width=522) (actual time=0.012..17.434 rows=13418 loops=1)\n>>>>                            Filter: (published = 1)\n>>>>                      ->  Hash  (cost=1391.18..1391.18 rows=3818 width=81)\n>>>> (actual time=14.268..14.268 rows=3818 loops=1)\n>>>>                            ->  Seq Scan on news  
(cost=0.00..1391.18\n>>>> rows=3818 width=81) (actual time=0.021..10.072 rows=3818 loops=1)\n>>>> \n>>>> The same is being requested from different sessions . So why is it not\n>>>> being cached .\n>>>> \n>>>> \n>>>> Because the query results are not cached only the RAW tables are.   The\n>>>> query is rerun every time it is requested.\n>>>> \n>>>> What is the group by clause accomplishing???\n>>>> The sorting and hash Aggregate is eating up all the time\n>>> \n>>> So this should mean that having say a 5 mb table in memory doing such query\n>>> above takes 2 secs in memory ?\n>> \n>> Nope.  But as others have pointed out, you need to figure out why it's\n>> taking 2.5 s but EXPLAIN ANALYZE is only saying 300 ms.\n>> \n>> There's other things you can do to optimize this query; for example:\n>> \n>> 1. Try creating an index on comments (createdate), and don't forget to\n>> ANALYZE the table afterward, or\n>> \n>> 2. Modify the query to remove the probably-unnecessary GROUP BY.\n>> \n>> But figuring out the times may be the first thing.  My guess is that\n>> the 2.5 s time is a time from your logs, maybe at a time when the\n>> system was busy, and the 300 ms time was what you got it when you ran\n>> it some other time.  But maybe there's some other explanation.  You\n>> should try to figure it out.\n>> \n>> ...Robert\n> \n> \n\n", "msg_date": "Mon, 22 Jun 2009 16:06:48 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: same query in high number of times" } ]
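Following Robert's two suggestions earlier in the thread, a sketch of what they could look like in SQL (column names are taken from the query quoted above; the index name is made up, and dropping the GROUP BY assumes the join cannot produce duplicate rows that would need collapsing):

-- 1. let the planner walk comments in createdate order
CREATE INDEX comments_createdate_idx ON comments (createdate);
ANALYZE comments;

-- 2. the same three rows without the GROUP BY
SELECT news.url_text, news.title,
       comments.name, comments.createdate, comments.user_id, comments.comment
FROM comments
JOIN news ON news.id = comments.cid
WHERE comments.published = '1'
ORDER BY comments.createdate DESC
LIMIT 3;

With such an index the planner can, in the best case, scan it backwards and stop after the first three published comments that join to news, instead of hashing and sorting all ~13,000 rows.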
[ { "msg_contents": "---- Message from mailto:[email protected] Peter Alban [email protected] at 06-21-2009 10:59:49 PM ------ \n\nOn Sun, Jun 21, 2009 at 10:01 PM, Justin Graf mailto:[email protected] wrote:\n \n \n\n \n \nPeter Alban wrote: \n\nduration: 2533.734 ms statement: \n \n \nLimit (cost=4313.54..4313.55 rows=3 width=595) (actual time=288.525..288.528 rows=3 loops=1)\n \n \n Because the query results are not cached only the RAW tables are. The query is rerun every time it is requested. \n \nWhat is the group by clause accomplishing??? \nThe sorting and hash Aggregate is eating up all the time \n\n So this should mean that having say a 5 mb table in memory doing such query above takes 2 secs in memory ? \n \nAssuming that, we probably have really slow memory :) \n \nBesides , the query makes less sense to me , but I dont write the queries (yet) simply looking at the server side .\nSo do you suggest to tune the queries or shall I rather look for other monitoring tools ? \ncheers,\nPeter\n \n\nThats a really tiny table it should be processed in sub milliseconds something else is going on. The actual time in the explain of the query states 288 millisecond not the 2533.734 you state from above. \n\nYou have not told us the version of PG or the OS its running on. \n\nIs there anything else running on the server???\n\n\n\n\n\n\n---- Message from Peter Alban\n<[email protected]> at 06-21-2009 10:59:49 PM ------\nOn\nSun, Jun 21, 2009 at 10:01 PM, Justin Graf <[email protected]>\nwrote:\n\n\n\nPeter Alban wrote:\n duration: 2533.734 ms  statement: \n\n\n Limit  (cost=4313.54..4313.55 rows=3 width=595) (actual\ntime=288.525..288.528 rows=3 loops=1)\n\n\n\n\nBecause the query results are not cached only the RAW tables are.   The\nquery is rerun every time it is requested. \n\nWhat is the group by clause accomplishing???  \nThe sorting and hash Aggregate is eating up all the time\n\n\nSo this should mean that having say a 5 mb table in memory doing\nsuch query above takes 2 secs in memory ? \n\nAssuming that, we probably have really slow memory  :) \n\nBesides , the query makes less sense to me , but I dont write the\nqueries (yet) simply looking at the server side  .\nSo do you suggest to tune the queries or shall I rather look for other\nmonitoring tools ? \ncheers,\nPeter\n\n\n\n\n\nThats a really tiny table  it should be processed in sub milliseconds\nsomething else is going on.  The actual time in the explain of the query states 288\nmillisecond not  the 2533.734 you state from above. \n\nYou have not told us the version of PG or the OS its running on. Is there anything else running on the server???", "msg_date": "Sun, 21 Jun 2009 18:09:10 -0400", "msg_from": "\"Justin Graf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: same query in high number of times" } ]
[ { "msg_contents": "Hi all,\n\nI'm running a quite large website which has its own forums. They are\ncurrently heavily used and I'm getting performance issues. Most of them\nare due to repeated UPDATE queries on a \"flags\" table.\n\nThis \"flags\" table has more or less the following fields:\n\nUserID - TopicID - LastReadAnswerID\n\nThe flags table keeps track of every topic a member has visited and\nremembers the last answer which was posted at this moment. It allows the\nuser to come back a few days after and immediately jump to the last\nanswer he has not read.\n\nMy problem is that everytime a user READS a topic, it UPDATES this flags\ntable to remember he has read it. This leads to multiple updates at the\nsame time on the same table, and an update can take a few seconds. This\nis not acceptable for my users.\n\nQuestion: what is the general rule of thumb here? How would you store\nthis information?\n\nThanks a lot in advance.\nMathieu.\n", "msg_date": "Tue, 23 Jun 2009 13:12:39 +0200", "msg_from": "Mathieu Nebra <[email protected]>", "msg_from_op": true, "msg_subject": "How would you store read/unread topic status?" }, { "msg_contents": "On 06/23/2009 01:12 PM, Mathieu Nebra wrote:\n> I'm running a quite large website which has its own forums. They are\n> currently heavily used and I'm getting performance issues. Most of them\n> are due to repeated UPDATE queries on a \"flags\" table.\n>\n> This \"flags\" table has more or less the following fields:\n>\n> UserID - TopicID - LastReadAnswerID\n>\n> The flags table keeps track of every topic a member has visited and\n> remembers the last answer which was posted at this moment. It allows the\n> user to come back a few days after and immediately jump to the last\n> answer he has not read.\n> My problem is that everytime a user READS a topic, it UPDATES this flags\n> table to remember he has read it. This leads to multiple updates at the\n> same time on the same table, and an update can take a few seconds. This\n> is not acceptable for my users.\nHave you analyzed why it takes that long? Determining that is the first \nstep of improving the current situation...\n\nMy first guess would be, that your disks cannot keep up with the number \nof syncronous writes/second. Do you know how many transactions with \nwrite access you have? Guessing from your description you do at least \none write for every page hit on your forum.\n\nWith the default settings every transaction needs to wait for io at the \nend - to ensure transactional semantics.\nDepending on your disk the number of possible writes/second is quite low \n- a normal SATA disk with 7200rpm can satisfy something around 130 \nsyncronous writes per second. Which is the upper limit on writing \ntransactions per second.\nWhat disks do you have?\n\nOn which OS are you? If you are on linux you could use iostat to get \nsome relevant statistics like:\niostat -x /path/to/device/the/database/resides/on 2 10\n\nThat gives you 10 statistics over periods of 2 seconds.\n\n\nDepending on those results there are numerous solutions to that problem...\n\n> Question: what is the general rule of thumb here? 
How would you store\n> this information?\nThe problem here is, that every read access writes to disk - that is not \ngoing to scale very well.\nOne possible solution is to use something like memcached to store the \nlast read post in memory and periodically write it into the database.\n\n\nWhich pg version are you using?\n\n\nAndres\n", "msg_date": "Tue, 23 Jun 2009 14:24:40 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "On Tue, Jun 23, 2009 at 1:12 PM, Mathieu Nebra<[email protected]> wrote:\n> This \"flags\" table has more or less the following fields:\n>\n> UserID - TopicID - LastReadAnswerID\n\nWe are doing pretty much same thing.\n\n> My problem is that everytime a user READS a topic, it UPDATES this flags\n> table to remember he has read it. This leads to multiple updates at the\n> same time on the same table, and an update can take a few seconds. This\n> is not acceptable for my users.\n\nFirst of all, and I'm sure you thought of this, an update isn't needed\nevery time a user reads a topic; only when there are new answers that\nneed to be marked as read. So an \"update ... where last_read_answer_id\n< ?\" should avoid the need for an update.\n\n(That said, I believe PostgreSQL diffs tuple updates, so in practice\nPostgreSQL might not be writing anything if you run an \"update\" with\nthe same value. I will let someone more intimate with the internal\ndetails of updates to comment on this.)\n\nSecondly, an update should not take \"a few seconds\". You might want to\ninvestigate this part before you turn to further optimizations.\n\nIn our application we defer the updates to a separate asynchronous\nprocess using a simple queue mechanism, but in our case, we found that\nthe updates are fast enough (in the order of a few milliseconds) not\nto warrant batching them into single transactions.\n\nA.\n", "msg_date": "Tue, 23 Jun 2009 14:37:10 +0200", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "On 06/23/2009 02:37 PM, Alexander Staubo wrote:\n> (That said, I believe PostgreSQL diffs tuple updates, so in practice\n> PostgreSQL might not be writing anything if you run an \"update\" with\n> the same value. I will let someone more intimate with the internal\n> details of updates to comment on this.)\nNo, it does not do that by default.\nYou can write a trigger to do that though - and there is one packaged \nwith the core version in the upcoming 8.4 version.\n\nAndres\n", "msg_date": "Tue, 23 Jun 2009 14:43:03 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "Mathieu Nebra wrote:\n> Hi all,\n>\n> I'm running a quite large website which has its own forums. They are\n> currently heavily used and I'm getting performance issues. Most of them\n> are due to repeated UPDATE queries on a \"flags\" table.\n>\n> This \"flags\" table has more or less the following fields:\n>\n> UserID - TopicID - LastReadAnswerID\n>\n> The flags table keeps track of every topic a member has visited and\n> remembers the last answer which was posted at this moment. 
It allows the\n> user to come back a few days after and immediately jump to the last\n> answer he has not read.\n>\n> My problem is that everytime a user READS a topic, it UPDATES this flags\n> table to remember he has read it. This leads to multiple updates at the\n> same time on the same table, and an update can take a few seconds. This\n> is not acceptable for my users.\n>\n> Question: what is the general rule of thumb here? How would you store\n> this information?\n>\n> Thanks a lot in advance.\n> Mathieu.\n>\n> \nSounds like the server is getting IO bound by checkpoints causing flush \nto disk causing a IO to become bound.\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\nthere is some 8.0-8.2 tuning ideas in this link.\n\nYes this is acceptable way to store such information. \n\nWhat is the PG version. performance tuning options are different \ndepending on the version???\nhttp://wiki.postgresql.org/wiki/Performance_Optimization\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n", "msg_date": "Tue, 23 Jun 2009 09:57:56 -0400", "msg_from": "justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": ">\n> In our application we defer the updates to a separate asynchronous\n> process using a simple queue mechanism, but in our case, we found that\n> the updates are fast enough (in the order of a few milliseconds) not\n> to warrant batching them into single transactions.\n>\n\nWe do a very similar trick for another sort of data and its worked wonders\nfor performance. We had more frequent updates to fewer rows, though. If\nyou happen to be using Java, HashMap and TreeMap are perfect for this\nbecause they are reentrant so you don't have to worry about synchronizing\nyour sweeper with your web page activities. As an added bonus, when you do\nthis trick you don't have to query this information from the database unless\nyou have a cache miss.\n\n\nIn our application we defer the updates to a separate asynchronous\nprocess using a simple queue mechanism, but in our case, we found that\nthe updates are fast enough (in the order of a few milliseconds) not\nto warrant batching them into single transactions.\nWe do a very similar trick for another sort of data and its worked wonders for performance.  We had more frequent updates to fewer rows, though.  If you happen to be using Java, HashMap and TreeMap are perfect for this because they are reentrant so you don't have to worry about synchronizing your sweeper with your web page activities.  As an added bonus, when you do this trick you don't have to query this information from the database unless you have a cache miss.", "msg_date": "Tue, 23 Jun 2009 10:11:33 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "On Tue, 23 Jun 2009, Nikolas Everett wrote:\n> If you happen to be using Java, HashMap and TreeMap are perfect for this \n> because they are reentrant so you don't have to worry about \n> synchronizing your sweeper with your web page activities.\n\nSee the note in http://java.sun.com/javase/6/docs/api/java/util/TreeMap.html\n\n> \"Note that this implementation is not synchronized.\"\n\nIf you have multiple threads accessing a TreeMap or HashMap, then they \nmust be synchronised to ensure that only one thread at a time is accessing \nit. 
Otherwise, you may suffer severe data loss and possibly even JVM \ncrashes. Perhaps you meant java.util.concurrent.ConcurrentHashMap?\n\nBe very careful.\n\nMatthew\n\n-- \n Now, you would have thought these coefficients would be integers, given that\n we're working out integer results. Using a fraction would seem really\n stupid. Well, I'm quite willing to be stupid here - in fact, I'm going to\n use complex numbers. -- Computer Science Lecturer\n", "msg_date": "Tue, 23 Jun 2009 15:20:19 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "> On 06/23/2009 01:12 PM, Mathieu Nebra wrote:\n>> >> I'm running a quite large website which has its own forums. They are\n>> >> currently heavily used and I'm getting performance issues. Most of\nthem\n>> >> are due to repeated UPDATE queries on a \"flags\" table.\n>> >>\n>> >> This \"flags\" table has more or less the following fields:\n>> >>\n>> >> UserID - TopicID - LastReadAnswerID\n>> >>\n>> >> The flags table keeps track of every topic a member has visited and\n>> >> remembers the last answer which was posted at this moment. It\nallows the\n>> >> user to come back a few days after and immediately jump to the last\n>> >> answer he has not read.\n>> >> My problem is that everytime a user READS a topic, it UPDATES this\nflags\n>> >> table to remember he has read it. This leads to multiple updates\nat the\n>> >> same time on the same table, and an update can take a few seconds.\nThis\n>> >> is not acceptable for my users.\n> > Have you analyzed why it takes that long? Determining that is the first\n> > step of improving the current situation...\n> >\n> > My first guess would be, that your disks cannot keep up with the number\n> > of syncronous writes/second. Do you know how many transactions with\n> > write access you have? Guessing from your description you do at least\n> > one write for every page hit on your forum.\n\nI don't know how many writes/s Pgsql can handle on my server, but I\nfirst suspected that it was good practice to avoid unnecessary writes.\n\nI do 1 write/page for every connected user on the forums.\nI do the same on another part of my website to increment the number of\npage views (this was not part of my initial question but it is very close).\n\n> >\n> > With the default settings every transaction needs to wait for io at the\n> > end - to ensure transactional semantics.\n> > Depending on your disk the number of possible writes/second is quite low\n> > - a normal SATA disk with 7200rpm can satisfy something around 130\n> > syncronous writes per second. Which is the upper limit on writing\n> > transactions per second.\n> > What disks do you have?\n\nWe have 2 SAS RAID 0 15000rpm disks.\n\n> >\n> > On which OS are you? 
If you are on linux you could use iostat to get\n> > some relevant statistics like:\n> > iostat -x /path/to/device/the/database/resides/on 2 10\n> >\n> > That gives you 10 statistics over periods of 2 seconds.\n> >\n> >\n> > Depending on those results there are numerous solutions to that\nproblem...\n\nHere it is:\n\n$ iostat -x /dev/sda 2 10\nLinux 2.6.18-6-amd64 (scratchy) \t23.06.2009\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 18,02 0,00 12,87 13,13 0,00 55,98\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0,94 328,98 29,62 103,06 736,58 6091,14 51,46\n 0,04 0,25 0,04 0,51\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 39,65 0,00 48,38 2,00 0,00 9,98\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0,00 0,00 10,00 78,00 516,00 1928,00 27,77\n 6,44 73,20 2,75 24,20\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 40,15 0,00 48,13 2,24 0,00 9,48\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0,00 0,00 6,47 100,50 585,07 2288,56 26,87\n 13,00 121,56 3,00 32,04\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 45,14 0,00 45,64 6,73 0,00 2,49\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 1,00 0,00 34,00 157,50 1232,00 3904,00 26,82\n 26,64 139,09 3,03 58,00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 46,25 0,00 49,25 3,50 0,00 1,00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0,00 0,00 27,00 173,00 884,00 4224,00 25,54\n 24,46 122,32 3,00 60,00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 44,42 0,00 47,64 2,23 0,00 5,71\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0,00 0,00 15,42 140,30 700,50 3275,62 25,53\n 17,94 115,21 2,81 43,78\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 41,75 0,00 48,50 2,50 0,00 7,25\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0,50 0,00 21,11 116,08 888,44 2472,36 24,50\n 12,62 91,99 2,55 34,97\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 44,03 0,00 46,27 2,99 0,00 6,72\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 9,00 0,00 10,00 119,00 484,00 2728,00 24,90\n 15,15 117,47 2,70 34,80\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 36,91 0,00 51,37 2,49 0,00 9,23\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0,99 0,00 14,78 136,45 390,15 2825,62 21,26\n 21,86 144,52 2,58 39,01\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 38,75 0,00 48,75 1,00 0,00 11,50\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0,00 0,00 7,54 67,34 377,89 1764,82 28,62\n 5,38 71,89 2,95 22,11\n\n\n\n> >\n>> >> Question: what is the general rule of thumb here? How would you store\n>> >> this information?\n> > The problem here is, that every read access writes to disk - that is not\n> > going to scale very well.\n\nThat's what I thought.\n\n> > One possible solution is to use something like memcached to store the\n> > last read post in memory and periodically write it into the database.\n> >\n\nWe're starting using memcached. 
But how would you \"periodically\" write\nthat to database?\n\n> >\n> > Which pg version are you using?\n\nI should have mentionned that before sorry: PostgreSQL 8.2\n\nThanks a lot!\n\n\n\nAndres Freund a écrit :\n> On 06/23/2009 01:12 PM, Mathieu Nebra wrote:\n>> I'm running a quite large website which has its own forums. They are\n>> currently heavily used and I'm getting performance issues. Most of them\n>> are due to repeated UPDATE queries on a \"flags\" table.\n>>\n>> This \"flags\" table has more or less the following fields:\n>>\n>> UserID - TopicID - LastReadAnswerID\n>>\n>> The flags table keeps track of every topic a member has visited and\n>> remembers the last answer which was posted at this moment. It allows the\n>> user to come back a few days after and immediately jump to the last\n>> answer he has not read.\n>> My problem is that everytime a user READS a topic, it UPDATES this flags\n>> table to remember he has read it. This leads to multiple updates at the\n>> same time on the same table, and an update can take a few seconds. This\n>> is not acceptable for my users.\n> Have you analyzed why it takes that long? Determining that is the first\n> step of improving the current situation...\n> \n> My first guess would be, that your disks cannot keep up with the number\n> of syncronous writes/second. Do you know how many transactions with\n> write access you have? Guessing from your description you do at least\n> one write for every page hit on your forum.\n> \n> With the default settings every transaction needs to wait for io at the\n> end - to ensure transactional semantics.\n> Depending on your disk the number of possible writes/second is quite low\n> - a normal SATA disk with 7200rpm can satisfy something around 130\n> syncronous writes per second. Which is the upper limit on writing\n> transactions per second.\n> What disks do you have?\n> \n> On which OS are you? If you are on linux you could use iostat to get\n> some relevant statistics like:\n> iostat -x /path/to/device/the/database/resides/on 2 10\n> \n> That gives you 10 statistics over periods of 2 seconds.\n> \n> \n> Depending on those results there are numerous solutions to that problem...\n> \n>> Question: what is the general rule of thumb here? How would you store\n>> this information?\n> The problem here is, that every read access writes to disk - that is not\n> going to scale very well.\n> One possible solution is to use something like memcached to store the\n> last read post in memory and periodically write it into the database.\n> \n> \n> Which pg version are you using?\n> \n> \n> Andres\n", "msg_date": "Tue, 23 Jun 2009 16:54:00 +0200", "msg_from": "Mathieu Nebra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "Alexander Staubo a écrit :\n> On Tue, Jun 23, 2009 at 1:12 PM, Mathieu Nebra<[email protected]> wrote:\n>> This \"flags\" table has more or less the following fields:\n>>\n>> UserID - TopicID - LastReadAnswerID\n> \n> We are doing pretty much same thing.\n> \n>> My problem is that everytime a user READS a topic, it UPDATES this flags\n>> table to remember he has read it. This leads to multiple updates at the\n>> same time on the same table, and an update can take a few seconds. This\n>> is not acceptable for my users.\n> \n> First of all, and I'm sure you thought of this, an update isn't needed\n> every time a user reads a topic; only when there are new answers that\n> need to be marked as read. So an \"update ... 
where last_read_answer_id\n> < ?\" should avoid the need for an update.\n\nWe don't work that way. We just \"remember\" he has read these answers and\nthen we can tell him \"there are no new messages for you to read\".\nSo we just need to write what he has read when he reads it.\n\n> \n> (That said, I believe PostgreSQL diffs tuple updates, so in practice\n> PostgreSQL might not be writing anything if you run an \"update\" with\n> the same value. I will let someone more intimate with the internal\n> details of updates to comment on this.)\n> \n> Secondly, an update should not take \"a few seconds\". You might want to\n> investigate this part before you turn to further optimizations.\n\nYes, I know there is a problem but I don't know if I am competent enough\nto tune PostgreSQL for that. It can take a while to understand the\nproblem, and I'm not sure I'll have the time for that.\n\nI am, however, opened to suggestions. Maybe I'm doing something wrong\nsomewhere.\n\n> \n> In our application we defer the updates to a separate asynchronous\n> process using a simple queue mechanism, but in our case, we found that\n> the updates are fast enough (in the order of a few milliseconds) not\n> to warrant batching them into single transactions.\n\nA few milliseconds would be cool.\nIn fact, defering to another process is a good idea, but I'm not sure if\nit is easy to implement. It would be great to have some sort of UPDATE\n... LOW PRIORITY to make the request non blocking.\n\nThanks.\n", "msg_date": "Tue, 23 Jun 2009 17:00:08 +0200", "msg_from": "Mathieu Nebra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": ">> > Which pg version are you using?\n>\n> I should have mentionned that before sorry: PostgreSQL 8.2\n\nI think there is an awful lot of speculation on this thread about what\nyour problem is without anywhere near enough investigation. A couple\nof seconds for an update is a really long time, unless your server is\nabsolutely slammed, in which case probably everything is taking a long\ntime. We need to get some more information on what is happening here.\n Approximately how many requests per second are you servicing? Also,\ncan you:\n\n1. Run EXPLAIN ANALYZE on a representative UPDATE statement and post\nthe exact query and the output.\n\n2. Run VACUUM VERBOSE on your database and send the last 10 lines or\nso of the output.\n\n3. Try your UPDATE statement at a low-traffic time of day and see\nwhether it's faster than it is at a high-traffic time of day, and by\nhow much. Or dump your database and reload it on a dev server and see\nhow fast it runs there.\n\n...Robert\n", "msg_date": "Tue, 23 Jun 2009 11:04:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "Mathieu Nebra <mateo21 'at' siteduzero.com> writes:\n\n>> (That said, I believe PostgreSQL diffs tuple updates, so in practice\n>> PostgreSQL might not be writing anything if you run an \"update\" with\n>> the same value. I will let someone more intimate with the internal\n>> details of updates to comment on this.)\n>> \n>> Secondly, an update should not take \"a few seconds\". You might want to\n>> investigate this part before you turn to further optimizations.\n>\n> Yes, I know there is a problem but I don't know if I am competent enough\n> to tune PostgreSQL for that. 
It can take a while to understand the\n> problem, and I'm not sure I'll have the time for that.\n\nShort story: run the query in psql prepending EXPLAIN ANALYZE in\nfront of it and copy-paste the output in reply to that list.\n\nLong story: there are a lot of interesting material in PG\nofficial documentation about optimization. It is very worth a\nread but it's longer than a short story. In my experience,\ndatabase performance can be degraded orders of magnitude if not\nconfigured properly.\n\n> I am, however, opened to suggestions. Maybe I'm doing something wrong\n> somewhere.\n>\n>> \n>> In our application we defer the updates to a separate asynchronous\n>> process using a simple queue mechanism, but in our case, we found that\n>> the updates are fast enough (in the order of a few milliseconds) not\n>> to warrant batching them into single transactions.\n>\n> A few milliseconds would be cool.\n\nThat also depends on the query. If your update selects rows not\naccording to an index you're going to be in trouble if the table\nhosts a lot of data, but that's fair. So you might just need an\nindex. That might also be related to row bloat. Your query with\nEXPLAIN ANALYZE would tell what postgres does (if it uses an\nindex or not).\n\n> In fact, defering to another process is a good idea, but I'm not sure if\n> it is easy to implement. It would be great to have some sort of UPDATE\n\nNo article on the site du z�ro explaining how to implement\nproducer-consumers? :) But that must really be thought before\nimplementing. It's not worth piling queries in memory because it\nwill create other problems if queries are produced faster than\nconsumed in the long run.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Tue, 23 Jun 2009 17:11:55 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "On 06/23/2009 04:54 PM, Mathieu Nebra wrote:\n>> On 06/23/2009 01:12 PM, Mathieu Nebra wrote:\n>>>>> I'm running a quite large website which has its own forums.\n>>>>> They are currently heavily used and I'm getting performance\n>>>>> issues. Most of\n> them\n>>>>> are due to repeated UPDATE queries on a \"flags\" table.\n>>>>>\n>>>>> This \"flags\" table has more or less the following fields:\n>>>>>\n>>>>> UserID - TopicID - LastReadAnswerID\n>>>>>\n>>>>> The flags table keeps track of every topic a member has\n>>>>> visited and remembers the last answer which was posted at\n>>>>> this moment. It allows the user to come back a few days\n>>>>> after and immediately jump to the last answer he has not\n>>>>> read. My problem is that everytime a user READS a topic, it\n>>>>> UPDATES this flags table to remember he has read it. This\n>>>>> leads to multiple updates at the same time on the same table,\n>>>>> and an update can take a few seconds. This is not acceptable\n>>>>> for my users.\n>>> Have you analyzed why it takes that long? Determining that is the\n>>> first step of improving the current situation...\n>>>\n>>> My first guess would be, that your disks cannot keep up with the\n>>> number of syncronous writes/second. Do you know how many\n>>> transactions with write access you have? 
Guessing from your\n>>> description you do at least one write for every page hit on your\n>>> forum.\n>\n> I don't know how many writes/s Pgsql can handle on my server, but I\n> first suspected that it was good practice to avoid unnecessary\n> writes.\nIt surely is.\n\n> I do 1 write/page for every connected user on the forums. I do the\n> same on another part of my website to increment the number of page\n> views (this was not part of my initial question but it is very\n> close).\nThat even more cries for some in-memory-caching.\n\n>>> On which OS are you? If you are on linux you could use iostat to\n>>> get some relevant statistics like: iostat -x\n>>> /path/to/device/the/database/resides/on 2 10\n>>>\n>>> That gives you 10 statistics over periods of 2 seconds.\n>>>\n>>>\n>>> Depending on those results there are numerous solutions to that\n> problem...\n>\n> Here it is:\n>\n> $ iostat -x /dev/sda 2 10 Linux 2.6.18-6-amd64 (scratchy) 23.06.2009\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 18,02 0,00\n> 12,87 13,13 0,00 55,98\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 0,94\n> 328,98 29,62 103,06 736,58 6091,14 51,46 0,04 0,25 0,04\n> 0,51\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 39,65 0,00\n> 48,38 2,00 0,00 9,98\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 0,00 0,00\n> 10,00 78,00 516,00 1928,00 27,77 6,44 73,20 2,75 24,20\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 40,15 0,00\n> 48,13 2,24 0,00 9,48\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 0,00 0,00\n> 6,47 100,50 585,07 2288,56 26,87 13,00 121,56 3,00 32,04\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 45,14 0,00\n> 45,64 6,73 0,00 2,49\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 1,00 0,00\n> 34,00 157,50 1232,00 3904,00 26,82 26,64 139,09 3,03 58,00\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 46,25 0,00\n> 49,25 3,50 0,00 1,00\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 0,00 0,00\n> 27,00 173,00 884,00 4224,00 25,54 24,46 122,32 3,00 60,00\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 44,42 0,00\n> 47,64 2,23 0,00 5,71\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 0,00 0,00\n> 15,42 140,30 700,50 3275,62 25,53 17,94 115,21 2,81 43,78\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 41,75 0,00\n> 48,50 2,50 0,00 7,25\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 0,50 0,00\n> 21,11 116,08 888,44 2472,36 24,50 12,62 91,99 2,55 34,97\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 44,03 0,00\n> 46,27 2,99 0,00 6,72\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 9,00 0,00\n> 10,00 119,00 484,00 2728,00 24,90 15,15 117,47 2,70 34,80\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 36,91 0,00\n> 51,37 2,49 0,00 9,23\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 0,99 0,00\n> 14,78 136,45 390,15 2825,62 21,26 21,86 144,52 2,58 39,01\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle 38,75 0,00\n> 48,75 1,00 0,00 11,50\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n> avgrq-sz avgqu-sz await svctm %util sda 0,00 0,00\n> 7,54 67,34 377,89 1764,82 28,62 5,38 71,89 2,95 22,11\nYou see that your average wait time 'await' is 
quite high. That\nindicates some contention. You have somewhere between 50-200\nwrites/second, so you may be maxing out your disk (depending on your\nconfig those writes may mainly go to one disk at a time).\n\n\n>>> One possible solution is to use something like memcached to store\n>>> the last read post in memory and periodically write it into the\n>>> database.\n> We're starting using memcached. But how would you \"periodically\"\n> write that to database?\nWhere do you see the problem?\n\n>>> Which pg version are you using?\n> I should have mentionned that before sorry: PostgreSQL 8.2\nI definitely would consider upgrading to 8.3 - even without any config\nchanges it might bring quite some improvement.\n\nBut mainly it would allow you to use \"asynchronous commit\" - which could\npossibly increase your throughput tremendously.\nIt has the drawback that you possibly loose async transactions in case\nof crash - but that doesn't sound too bad for your use case (use it only\nin the transactions where it makes sense).\n\n\nBut all of that does not explain the issue sufficiently - you should not \nget that slow updates.\nI would suggest you configure \"log_min_statement_duration\" to get the \nslower queries.\nYou then should run those slow statements using 'EXPLAIN ANALYZE' to see \nwhere the time is spent.\n\nHow are you vacuuming?\n\n\nAndres\n", "msg_date": "Tue, 23 Jun 2009 17:14:07 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": ">>>> Which pg version are you using?\n>>\n>> I should have mentionned that before sorry: PostgreSQL 8.2\n>\n> I definitely would consider upgrading to 8.3 - even without any config\n> changes it might bring quite some improvement.\n>\n> But mainly it would allow you to use \"asynchronous commit\" - which could\n> possibly increase your throughput tremendously.\n\nHOT can potentitally help a lot for this workload, too, if the columns\nbeing updated are not indexed.\n\n...Robert\n", "msg_date": "Tue, 23 Jun 2009 11:30:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "not better just to store last time user visited the topic ? or forum in\ngeneral, and compare that ?\n\nnot better just to store last time user visited the topic ? or forum in general, and compare that ?", "msg_date": "Tue, 23 Jun 2009 16:34:30 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "Robert Haas a �crit :\n>>>> Which pg version are you using?\n>> I should have mentionned that before sorry: PostgreSQL 8.2\n> \n> I think there is an awful lot of speculation on this thread about what\n> your problem is without anywhere near enough investigation. A couple\n> of seconds for an update is a really long time, unless your server is\n> absolutely slammed, in which case probably everything is taking a long\n> time. We need to get some more information on what is happening here.\n\nYou're right, I'll give you the information you need.\n\n> Approximately how many requests per second are you servicing? Also,\n\nHow can I extract this information from the database? I know how to use\npg_stat_user_tables. 
My table has:\n\nseq_tup_read\n133793491714\n\nidx_scan\n12408612540\n\nidx_tup_fetch\n41041660903\n\nn_tup_ins\n14700038\n\nn_tup_upd\n6698236\n\nn_tup_del\n15990670\n\n> can you:\n> \n> 1. Run EXPLAIN ANALYZE on a representative UPDATE statement and post\n> the exact query and the output.\n\n\"Index Scan using prj_frm_flg_pkey on prj_frm_flg (cost=0.00..8.58\nrows=1 width=18)\"\n\" Index Cond: ((flg_mid = 3) AND (flg_sid = 123764))\"\n\nThis time it only took 54ms, but maybe it's already a lot.\n\n\n> \n> 2. Run VACUUM VERBOSE on your database and send the last 10 lines or\n> so of the output.\n\nIt's not very long, I can give you the whole log:\n\nINFO: vacuuming \"public.prj_frm_flg\"INFO: scanned index\n\"prj_frm_flg_pkey\" to remove 74091 row versions\nDETAIL: CPU 0.15s/0.47u sec elapsed 53.10 sec.INFO: scanned index\n\"flg_fav\" to remove 74091 row versions\nDETAIL: CPU 0.28s/0.31u sec elapsed 91.82 sec.INFO: scanned index\n\"flg_notif\" to remove 74091 row versions\nDETAIL: CPU 0.36s/0.37u sec elapsed 80.75 sec.INFO: scanned index\n\"flg_post\" to remove 74091 row versions\nDETAIL: CPU 0.31s/0.37u sec elapsed 115.86 sec.INFO: scanned index\n\"flg_no_inter\" to remove 74091 row versions\nDETAIL: CPU 0.34s/0.33u sec elapsed 68.96 sec.INFO: \"prj_frm_flg\":\nremoved 74091 row versions in 5979 pages\nDETAIL: CPU 0.29s/0.34u sec elapsed 100.37 sec.INFO: index\n\"prj_frm_flg_pkey\" now contains 1315895 row versions in 7716 pages\nDETAIL: 63153 index row versions were removed.\n672 index pages have been deleted, 639 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_fav\" now contains\n1315895 row versions in 18228 pages\nDETAIL: 73628 index row versions were removed.\n21 index pages have been deleted, 16 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_notif\" now\ncontains 1315895 row versions in 18179 pages\nDETAIL: 73468 index row versions were removed.\n22 index pages have been deleted, 13 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_post\" now\ncontains 1315895 row versions in 18194 pages\nDETAIL: 73628 index row versions were removed.\n30 index pages have been deleted, 23 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_no_inter\" now\ncontains 1315895 row versions in 8596 pages\nDETAIL: 73628 index row versions were removed.\n13 index pages have been deleted, 8 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: \"prj_frm_flg\": found 74091\nremovable, 1315895 nonremovable row versions in 10485 pages\nDETAIL: 326 dead row versions cannot be removed yet.\nThere were 253639 unused item pointers.\n10431 pages contain useful free space.\n0 pages are entirely empty.\nCPU 1.91s/2.28u sec elapsed 542.75 sec.\n\nTotal: 542877 ms.\n\n> \n> 3. Try your UPDATE statement at a low-traffic time of day and see\n> whether it's faster than it is at a high-traffic time of day, and by\n> how much. Or dump your database and reload it on a dev server and see\n> how fast it runs there.\n\nIt took 4ms.\n", "msg_date": "Tue, 23 Jun 2009 17:50:50 +0200", "msg_from": "Mathieu Nebra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "On Tue, Jun 23, 2009 at 11:50 AM, Mathieu Nebra<[email protected]> wrote:\n>>  Approximately how many requests per second are you servicing?  Also,\n>\n> How can I extract this information from the database? I know how to use\n> pg_stat_user_tables. 
My table has:\n\nI was thinking you might look at your httpd logs. Not sure how to get\nit otherwise.\n\n>> can you:\n>>\n>> 1. Run EXPLAIN ANALYZE on a representative UPDATE statement and post\n>> the exact query and the output.\n>\n> \"Index Scan using prj_frm_flg_pkey on prj_frm_flg  (cost=0.00..8.58\n> rows=1 width=18)\"\n> \"  Index Cond: ((flg_mid = 3) AND (flg_sid = 123764))\"\n>\n> This time it only took 54ms, but maybe it's already a lot.\n\nThat looks like EXPLAIN, not EXPLAIN ANALYZE. And can we also have the query?\n\n>> 2. Run VACUUM VERBOSE on your database and send the last 10 lines or\n>> so of the output.\n>\n> It's not very long, I can give you the whole log:\n>\n> INFO:  vacuuming \"public.prj_frm_flg\"INFO:  scanned index\n> \"prj_frm_flg_pkey\" to remove 74091 row versions\n> DETAIL:  CPU 0.15s/0.47u sec elapsed 53.10 sec.INFO:  scanned index\n> \"flg_fav\" to remove 74091 row versions\n> DETAIL:  CPU 0.28s/0.31u sec elapsed 91.82 sec.INFO:  scanned index\n> \"flg_notif\" to remove 74091 row versions\n> DETAIL:  CPU 0.36s/0.37u sec elapsed 80.75 sec.INFO:  scanned index\n> \"flg_post\" to remove 74091 row versions\n> DETAIL:  CPU 0.31s/0.37u sec elapsed 115.86 sec.INFO:  scanned index\n> \"flg_no_inter\" to remove 74091 row versions\n> DETAIL:  CPU 0.34s/0.33u sec elapsed 68.96 sec.INFO:  \"prj_frm_flg\":\n> removed 74091 row versions in 5979 pages\n> DETAIL:  CPU 0.29s/0.34u sec elapsed 100.37 sec.INFO:  index\n> \"prj_frm_flg_pkey\" now contains 1315895 row versions in 7716 pages\n> DETAIL:  63153 index row versions were removed.\n> 672 index pages have been deleted, 639 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"flg_fav\" now contains\n> 1315895 row versions in 18228 pages\n> DETAIL:  73628 index row versions were removed.\n> 21 index pages have been deleted, 16 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"flg_notif\" now\n> contains 1315895 row versions in 18179 pages\n> DETAIL:  73468 index row versions were removed.\n> 22 index pages have been deleted, 13 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"flg_post\" now\n> contains 1315895 row versions in 18194 pages\n> DETAIL:  73628 index row versions were removed.\n> 30 index pages have been deleted, 23 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"flg_no_inter\" now\n> contains 1315895 row versions in 8596 pages\n> DETAIL:  73628 index row versions were removed.\n> 13 index pages have been deleted, 8 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  \"prj_frm_flg\": found 74091\n> removable, 1315895 nonremovable row versions in 10485 pages\n> DETAIL:  326 dead row versions cannot be removed yet.\n> There were 253639 unused item pointers.\n> 10431 pages contain useful free space.\n> 0 pages are entirely empty.\n> CPU 1.91s/2.28u sec elapsed 542.75 sec.\n>\n> Total: 542877 ms.\n\nIs that just for the one table? I meant a database-wide VACUUM\nVERBOSE, so you can see if you've blown out your free-space map.\n\n>> 3. Try your UPDATE statement at a low-traffic time of day and see\n>> whether it's faster than it is at a high-traffic time of day, and by\n>> how much.  
Or dump your database and reload it on a dev server and see\n>> how fast it runs there.\n>\n> It took 4ms.\n\nWas that at a low traffic time of day, or on a different server?\n\n...Robert\n", "msg_date": "Tue, 23 Jun 2009 12:06:36 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "So your update doesn't take long to run during off-peak times, so\nbasically your options are:\n\n1. Optimize your postgresql.conf settings or upgrade to the latest\nversion of PostgreSQL.\n\n2. Redesign your forum code so it can scale better.\n\n3. Upgrade your servers hardware as it may be overloaded.\n\nI would probably attack those in the order I described. \n\nAs far as redesigning your forum code, keep in mind that in PostgreSQL\nan update is basically a select, delete, insert in a single statement.\nFirst it needs to find the rows to update, it marks the rows for\ndeletion (which vacuum later does) and inserts a new row. So updates\ncan be quite expensive. \n\nIn SOME situations, it can be faster to do inserts only, and modify\nyour select query to get just the data you need, for example:\n\nRather then an update like this:\n\nupdate <table> set LastReadAnswerID = <value> where UserID = <value>\nAND TopicID = <value>\n\nYou could do this instead:\n\ninsert into <table> VALUES(<user_id>,<topic_id>,<last_read_answer_id>)\n\nThen just modify your select statement slightly to get the last\ninserted row:\n\nselect * from <table> where user_id = <value> AND topic_id = <value>\norder by LastReadAnswerID DESC LIMIT 1\n\nThis makes your select statement slightly more expensive but your\ninsert statement pretty much as cheap as possible. Since its much\neasier to cache select results you could easily wrap some caching\nmechanism around your select query to reduce the load there too. \n\nThen using a task scheduler like cron simply clear out old rows from the\ntable you insert into every minute, 5 minutes, hour, day, whatever makes\nmost sense to keep the select queries fast.\n\nA memcached solution would probably be much better, but its also likely\nmuch more involved to do.\n\n\nOn Tue, 23 Jun 2009 17:50:50 +0200\nMathieu Nebra <[email protected]> wrote:\n\n> Robert Haas a écrit :\n> >>>> Which pg version are you using?\n> >> I should have mentionned that before sorry: PostgreSQL 8.2\n> > \n> > I think there is an awful lot of speculation on this thread about\n> > what your problem is without anywhere near enough investigation. A\n> > couple of seconds for an update is a really long time, unless your\n> > server is absolutely slammed, in which case probably everything is\n> > taking a long time. We need to get some more information on what\n> > is happening here.\n> \n> You're right, I'll give you the information you need.\n> \n> > Approximately how many requests per second are you servicing?\n> > Also,\n> \n> How can I extract this information from the database? I know how to\n> use pg_stat_user_tables. My table has:\n> \n> seq_tup_read\n> 133793491714\n> \n> idx_scan\n> 12408612540\n> \n> idx_tup_fetch\n> 41041660903\n> \n> n_tup_ins\n> 14700038\n> \n> n_tup_upd\n> 6698236\n> \n> n_tup_del\n> 15990670\n> \n> > can you:\n> > \n> > 1. 
Run EXPLAIN ANALYZE on a representative UPDATE statement and post\n> > the exact query and the output.\n> \n> \"Index Scan using prj_frm_flg_pkey on prj_frm_flg (cost=0.00..8.58\n> rows=1 width=18)\"\n> \" Index Cond: ((flg_mid = 3) AND (flg_sid = 123764))\"\n> \n> This time it only took 54ms, but maybe it's already a lot.\n> \n> \n> > \n> > 2. Run VACUUM VERBOSE on your database and send the last 10 lines or\n> > so of the output.\n> \n> It's not very long, I can give you the whole log:\n> \n> INFO: vacuuming \"public.prj_frm_flg\"INFO: scanned index\n> \"prj_frm_flg_pkey\" to remove 74091 row versions\n> DETAIL: CPU 0.15s/0.47u sec elapsed 53.10 sec.INFO: scanned index\n> \"flg_fav\" to remove 74091 row versions\n> DETAIL: CPU 0.28s/0.31u sec elapsed 91.82 sec.INFO: scanned index\n> \"flg_notif\" to remove 74091 row versions\n> DETAIL: CPU 0.36s/0.37u sec elapsed 80.75 sec.INFO: scanned index\n> \"flg_post\" to remove 74091 row versions\n> DETAIL: CPU 0.31s/0.37u sec elapsed 115.86 sec.INFO: scanned index\n> \"flg_no_inter\" to remove 74091 row versions\n> DETAIL: CPU 0.34s/0.33u sec elapsed 68.96 sec.INFO: \"prj_frm_flg\":\n> removed 74091 row versions in 5979 pages\n> DETAIL: CPU 0.29s/0.34u sec elapsed 100.37 sec.INFO: index\n> \"prj_frm_flg_pkey\" now contains 1315895 row versions in 7716 pages\n> DETAIL: 63153 index row versions were removed.\n> 672 index pages have been deleted, 639 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_fav\" now\n> contains 1315895 row versions in 18228 pages\n> DETAIL: 73628 index row versions were removed.\n> 21 index pages have been deleted, 16 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_notif\" now\n> contains 1315895 row versions in 18179 pages\n> DETAIL: 73468 index row versions were removed.\n> 22 index pages have been deleted, 13 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_post\" now\n> contains 1315895 row versions in 18194 pages\n> DETAIL: 73628 index row versions were removed.\n> 30 index pages have been deleted, 23 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_no_inter\" now\n> contains 1315895 row versions in 8596 pages\n> DETAIL: 73628 index row versions were removed.\n> 13 index pages have been deleted, 8 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: \"prj_frm_flg\": found 74091\n> removable, 1315895 nonremovable row versions in 10485 pages\n> DETAIL: 326 dead row versions cannot be removed yet.\n> There were 253639 unused item pointers.\n> 10431 pages contain useful free space.\n> 0 pages are entirely empty.\n> CPU 1.91s/2.28u sec elapsed 542.75 sec.\n> \n> Total: 542877 ms.\n> \n> > \n> > 3. Try your UPDATE statement at a low-traffic time of day and see\n> > whether it's faster than it is at a high-traffic time of day, and by\n> > how much. Or dump your database and reload it on a dev server and\n> > see how fast it runs there.\n> \n> It took 4ms.\n> \n", "msg_date": "Tue, 23 Jun 2009 09:23:22 -0700", "msg_from": "Mike <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "So your update doesn't take long to run during off-peak times, so\nbasically your options are:\n\n1. Optimize your postgresql.conf settings or upgrade to the latest\nversion of PostgreSQL.\n\n2. Redesign your forum code so it can scale better.\n\n3. 
Upgrade your servers hardware as it may be overloaded.\n\nI would probably attack those in the order I described. \n\nAs far as redesigning your forum code, keep in mind that in PostgreSQL\nan update is basically a select, delete, insert in a single statement.\nFirst it needs to find the rows to update, it marks the rows for\ndeletion (which vacuum later does) and inserts a new row. So updates\ncan be quite expensive. \n\nIn SOME situations, it can be faster to do inserts only, and modify\nyour select query to get just the data you need, for example:\n\nRather then an update like this:\n\nupdate <table> set LastReadAnswerID = <value> where UserID = <value>\nAND TopicID = <value>\n\nYou could do this instead:\n\ninsert into <table> VALUES(<user_id>,<topic_id>,<last_read_answer_id>)\n\nThen just modify your select statement slightly to get the last\ninserted row:\n\nselect * from <table> where user_id = <value> AND topic_id = <value>\norder by LastReadAnswerID DESC LIMIT 1\n\nThis makes your select statement slightly more expensive but your\ninsert statement pretty much as cheap as possible. Since its much\neasier to cache select results you could easily wrap some caching\nmechanism around your select query to reduce the load there too. \n\nThen using a task scheduler like cron simply clear out old rows from the\ntable you insert into every minute, 5 minutes, hour, day, whatever makes\nmost sense to keep the select queries fast.\n\nA memcached solution would probably be much better, but its also likely\nmuch more involved to do.\n\n\n\nOn Tue, 23 Jun 2009 17:50:50 +0200\nMathieu Nebra <[email protected]> wrote:\n\n> Robert Haas a écrit :\n> >>>> Which pg version are you using?\n> >> I should have mentionned that before sorry: PostgreSQL 8.2\n> > \n> > I think there is an awful lot of speculation on this thread about\n> > what your problem is without anywhere near enough investigation. A\n> > couple of seconds for an update is a really long time, unless your\n> > server is absolutely slammed, in which case probably everything is\n> > taking a long time. We need to get some more information on what\n> > is happening here.\n> \n> You're right, I'll give you the information you need.\n> \n> > Approximately how many requests per second are you servicing?\n> > Also,\n> \n> How can I extract this information from the database? I know how to\n> use pg_stat_user_tables. My table has:\n> \n> seq_tup_read\n> 133793491714\n> \n> idx_scan\n> 12408612540\n> \n> idx_tup_fetch\n> 41041660903\n> \n> n_tup_ins\n> 14700038\n> \n> n_tup_upd\n> 6698236\n> \n> n_tup_del\n> 15990670\n> \n> > can you:\n> > \n> > 1. Run EXPLAIN ANALYZE on a representative UPDATE statement and post\n> > the exact query and the output.\n> \n> \"Index Scan using prj_frm_flg_pkey on prj_frm_flg (cost=0.00..8.58\n> rows=1 width=18)\"\n> \" Index Cond: ((flg_mid = 3) AND (flg_sid = 123764))\"\n> \n> This time it only took 54ms, but maybe it's already a lot.\n> \n> \n> > \n> > 2. 
Run VACUUM VERBOSE on your database and send the last 10 lines or\n> > so of the output.\n> \n> It's not very long, I can give you the whole log:\n> \n> INFO: vacuuming \"public.prj_frm_flg\"INFO: scanned index\n> \"prj_frm_flg_pkey\" to remove 74091 row versions\n> DETAIL: CPU 0.15s/0.47u sec elapsed 53.10 sec.INFO: scanned index\n> \"flg_fav\" to remove 74091 row versions\n> DETAIL: CPU 0.28s/0.31u sec elapsed 91.82 sec.INFO: scanned index\n> \"flg_notif\" to remove 74091 row versions\n> DETAIL: CPU 0.36s/0.37u sec elapsed 80.75 sec.INFO: scanned index\n> \"flg_post\" to remove 74091 row versions\n> DETAIL: CPU 0.31s/0.37u sec elapsed 115.86 sec.INFO: scanned index\n> \"flg_no_inter\" to remove 74091 row versions\n> DETAIL: CPU 0.34s/0.33u sec elapsed 68.96 sec.INFO: \"prj_frm_flg\":\n> removed 74091 row versions in 5979 pages\n> DETAIL: CPU 0.29s/0.34u sec elapsed 100.37 sec.INFO: index\n> \"prj_frm_flg_pkey\" now contains 1315895 row versions in 7716 pages\n> DETAIL: 63153 index row versions were removed.\n> 672 index pages have been deleted, 639 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_fav\" now\n> contains 1315895 row versions in 18228 pages\n> DETAIL: 73628 index row versions were removed.\n> 21 index pages have been deleted, 16 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_notif\" now\n> contains 1315895 row versions in 18179 pages\n> DETAIL: 73468 index row versions were removed.\n> 22 index pages have been deleted, 13 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_post\" now\n> contains 1315895 row versions in 18194 pages\n> DETAIL: 73628 index row versions were removed.\n> 30 index pages have been deleted, 23 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"flg_no_inter\" now\n> contains 1315895 row versions in 8596 pages\n> DETAIL: 73628 index row versions were removed.\n> 13 index pages have been deleted, 8 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: \"prj_frm_flg\": found 74091\n> removable, 1315895 nonremovable row versions in 10485 pages\n> DETAIL: 326 dead row versions cannot be removed yet.\n> There were 253639 unused item pointers.\n> 10431 pages contain useful free space.\n> 0 pages are entirely empty.\n> CPU 1.91s/2.28u sec elapsed 542.75 sec.\n> \n> Total: 542877 ms.\n> \n> > \n> > 3. Try your UPDATE statement at a low-traffic time of day and see\n> > whether it's faster than it is at a high-traffic time of day, and by\n> > how much. Or dump your database and reload it on a dev server and\n> > see how fast it runs there.\n> \n> It took 4ms.\n> \n", "msg_date": "Tue, 23 Jun 2009 09:25:15 -0700", "msg_from": "Mike <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "You're holding this behavior to far too strict of a transactional guarantee.\n\nThe client software can cache a set of recent views, and sent updates in\nbulk every 1 or 2 seconds. Worst case, if your client crashes you lose a\nsecond worth of user metadata updates on last accessed and view counts.\nThis isn't a financial transaction, don't build the app like one.\n\nThe same facility can serve as a read cache for other bits that don't need\nto be 'perfect' in the transactional sense -- counts on the number of views\n/ posts of a topic, etc. 
Using the db to store and retrieve such counts\nsynchronously is frankly, a bad application design.\n\n\nThe tricky part with the above is two fold: you need to have client\nsoftware capable of a thread-safe shared cache, and the clients will have to\nhave sticky-session if you are load balancing. Corner cases such as a\nserver going down and a user switching servers will need to be worked out.\n\n\nOn 6/23/09 4:12 AM, \"Mathieu Nebra\" <[email protected]> wrote:\n\n> Hi all,\n> \n> I'm running a quite large website which has its own forums. They are\n> currently heavily used and I'm getting performance issues. Most of them\n> are due to repeated UPDATE queries on a \"flags\" table.\n> \n> This \"flags\" table has more or less the following fields:\n> \n> UserID - TopicID - LastReadAnswerID\n> \n> The flags table keeps track of every topic a member has visited and\n> remembers the last answer which was posted at this moment. It allows the\n> user to come back a few days after and immediately jump to the last\n> answer he has not read.\n> \n> My problem is that everytime a user READS a topic, it UPDATES this flags\n> table to remember he has read it. This leads to multiple updates at the\n> same time on the same table, and an update can take a few seconds. This\n> is not acceptable for my users.\n> \n> Question: what is the general rule of thumb here? How would you store\n> this information?\n> \n> Thanks a lot in advance.\n> Mathieu.\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 23 Jun 2009 10:05:22 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "On 6/23/09 7:54 AM, \"Mathieu Nebra\" <[email protected]> wrote:\n\n>> On 06/23/2009 01:12 PM, Mathieu Nebra wrote:\n>>>>> I'm running a quite large website which has its own forums. They are\n>>>>> currently heavily used and I'm getting performance issues. Most of\n> them\n>>>>> are due to repeated UPDATE queries on a \"flags\" table.\n>>>>> \n>>>>> This \"flags\" table has more or less the following fields:\n>>>>> \n>>>>> UserID - TopicID - LastReadAnswerID\n>>>>> \n>>>>> The flags table keeps track of every topic a member has visited and\n>>>>> remembers the last answer which was posted at this moment. It\n> allows the\n>>>>> user to come back a few days after and immediately jump to the last\n>>>>> answer he has not read.\n>>>>> My problem is that everytime a user READS a topic, it UPDATES this\n> flags\n>>>>> table to remember he has read it. This leads to multiple updates\n> at the\n>>>>> same time on the same table, and an update can take a few seconds.\n> This\n>>>>> is not acceptable for my users.\n>>> Have you analyzed why it takes that long? Determining that is the first\n>>> step of improving the current situation...\n>>> \n>>> My first guess would be, that your disks cannot keep up with the number\n>>> of syncronous writes/second. Do you know how many transactions with\n>>> write access you have? 
Guessing from your description you do at least\n>>> one write for every page hit on your forum.\n> \n> I don't know how many writes/s Pgsql can handle on my server, but I\n> first suspected that it was good practice to avoid unnecessary writes.\n> \n> I do 1 write/page for every connected user on the forums.\n> I do the same on another part of my website to increment the number of\n> page views (this was not part of my initial question but it is very close).\n> \n>>> \n>>> With the default settings every transaction needs to wait for io at the\n>>> end - to ensure transactional semantics.\n>>> Depending on your disk the number of possible writes/second is quite low\n>>> - a normal SATA disk with 7200rpm can satisfy something around 130\n>>> syncronous writes per second. Which is the upper limit on writing\n>>> transactions per second.\n>>> What disks do you have?\n> \n> We have 2 SAS RAID 0 15000rpm disks.\n> \n>>> \n>>> On which OS are you? If you are on linux you could use iostat to get\n>>> some relevant statistics like:\n>>> iostat -x /path/to/device/the/database/resides/on 2 10\n>>> \n>>> That gives you 10 statistics over periods of 2 seconds.\n>>> \n>>> \n>>> Depending on those results there are numerous solutions to that\n> problem...\n> \n> Here it is:\n> \n> $ iostat -x /dev/sda 2 10\n> Linux 2.6.18-6-amd64 (scratchy) 23.06.2009\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 18,02 0,00 12,87 13,13 0,00 55,98\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0,94 328,98 29,62 103,06 736,58 6091,14 51,46\n> 0,04 0,25 0,04 0,51\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 39,65 0,00 48,38 2,00 0,00 9,98\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0,00 0,00 10,00 78,00 516,00 1928,00 27,77\n> 6,44 73,20 2,75 24,20\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 40,15 0,00 48,13 2,24 0,00 9,48\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0,00 0,00 6,47 100,50 585,07 2288,56 26,87\n> 13,00 121,56 3,00 32,04\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 45,14 0,00 45,64 6,73 0,00 2,49\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 1,00 0,00 34,00 157,50 1232,00 3904,00 26,82\n> 26,64 139,09 3,03 58,00\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 46,25 0,00 49,25 3,50 0,00 1,00\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0,00 0,00 27,00 173,00 884,00 4224,00 25,54\n> 24,46 122,32 3,00 60,00\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 44,42 0,00 47,64 2,23 0,00 5,71\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0,00 0,00 15,42 140,30 700,50 3275,62 25,53\n> 17,94 115,21 2,81 43,78\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 41,75 0,00 48,50 2,50 0,00 7,25\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0,50 0,00 21,11 116,08 888,44 2472,36 24,50\n> 12,62 91,99 2,55 34,97\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 44,03 0,00 46,27 2,99 0,00 6,72\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 9,00 0,00 10,00 119,00 484,00 2728,00 24,90\n> 15,15 117,47 2,70 34,80\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 36,91 0,00 51,37 2,49 0,00 9,23\n> \n> Device: rrqm/s 
wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0,99 0,00 14,78 136,45 390,15 2825,62 21,26\n> 21,86 144,52 2,58 39,01\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 38,75 0,00 48,75 1,00 0,00 11,50\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0,00 0,00 7,54 67,34 377,89 1764,82 28,62\n> 5,38 71,89 2,95 22,11\n> \n\n\nI see a lot of io wait time there. My guess is that your DB is flooded with\nsynchronous writes.\n\nIF you want to optimize the hardware for this you have a couple options.\nI'm assuming your RAID 0 is not hardware RAID.\n\n1. Use 8.3+ and asynchronous commit (set synchronous_commit=false). This\nis safe data wise, but if your DB crashes you might lose the last second of\ntransactions or so that the app thought were comitted. For a DB forum, this\nis probably very acceptable. Performance should significantly gain as the\nwrites/sec will go down a lot.\n\n2. put your data on one partition and your WAL log on another.\n\n3. Get a battery backed hardware raid with write-back caching.\n\n4. If you are using ext3 on linux, make sure you mount with data=writeback\non the file system that your wal logs are on. data=ordered will cause the\nWHOLE file sytem to be flushed for each fsync, not just the tiny bit of WAL\nlog. \n\nIn short, if you combined 1,2, and 4, you'll probably have significantly\nmore capacity on the same server. So make sure your WAL log is in a\ndifferent file system from your OS and data, mount it optimally, and\nconsider turning synchronous_commit off.\n\nIf you're using RAID 0, I doubt the data is so precious that\nsynchronous_commit being true is important at all.\n\n\n> \n> \n>>> \n>>>>> Question: what is the general rule of thumb here? How would you store\n>>>>> this information?\n>>> The problem here is, that every read access writes to disk - that is not\n>>> going to scale very well.\n> \n> That's what I thought.\n> \n>>> One possible solution is to use something like memcached to store the\n>>> last read post in memory and periodically write it into the database.\n>>> \n> \n> We're starting using memcached. But how would you \"periodically\" write\n> that to database?\n> \n>>> \n>>> Which pg version are you using?\n> \n> I should have mentionned that before sorry: PostgreSQL 8.2\n> \n> Thanks a lot!\n> \n> \n> \n> Andres Freund a écrit :\n\n", "msg_date": "Tue, 23 Jun 2009 10:17:36 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "All the other comments are accurate, though it does seem like\nsomething the database ought to be able to handle.\n\nThe other thing which hasn't been mentioned is that you have a lot of\nindexes. Updates require maintaining all those indexes. Are all of\nthese indexes really necessary? Do you have routine queries which look\nup users based on their flags? Or all all your oltp transactions for\nspecific userids in which case you probably just need the index on\nuserid.\n\nYou'll probably find 8.3 helps this workload more than any tuning you\ncan do in the database though. Especially if you can reduce the number\nof indexes and avoid an index on any flags that are being updated.\n", "msg_date": "Tue, 23 Jun 2009 20:44:31 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" 
}, { "msg_contents": "Greg Stark a �crit :\n> All the other comments are accurate, though it does seem like\n> something the database ought to be able to handle.\n> \n> The other thing which hasn't been mentioned is that you have a lot of\n> indexes. Updates require maintaining all those indexes. Are all of\n> these indexes really necessary? Do you have routine queries which look\n> up users based on their flags? Or all all your oltp transactions for\n> specific userids in which case you probably just need the index on\n> userid.\n\n\nWe are using these indexes, but I can't be sure if we _really_ need them\nor not.\n\nI can go into detail. We have:\n\nUserID - TopicID - LastReadAnswerID - WrittenStatus - IsFavorite\n\nSo basically, we toggle the boolean flag WrittenStatus when the user has\nwritten in that topic. The same goes for IsFavorite.\n\nWe have indexes on them, so we can SELECT every topic WHERE the user has\nwritten. Is it the good way of doing this?\n\n\nOh, I've made a mistake before, we have RAID 1 disks, not RAID 0.\n\n\n> \n> You'll probably find 8.3 helps this workload more than any tuning you\n> can do in the database though. Especially if you can reduce the number\n> of indexes and avoid an index on any flags that are being updated.\n\nI'll start this way, thanks. First 8.3, then I'll check my flags.\n\nI have a lot of ways to investigate and I would like to thank every\ncontributor here. I might come again with more precise information.\n\nThanks.\n", "msg_date": "Tue, 23 Jun 2009 22:04:15 +0200", "msg_from": "Mathieu Nebra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "On Tue, Jun 23, 2009 at 1:12 PM, Mathieu Nebra<[email protected]> wrote:\n> The flags table keeps track of every topic a member has visited and\n> remembers the last answer which was posted at this moment. It allows the\n> user to come back a few days after and immediately jump to the last\n> answer he has not read.\n\nI forgot to mention that we speed up our queries by caching the \"last\nread\" ID in Memcached. This is the kind of thing that Memcached is\nideal for.\n\nFor example, we show the list of the most recent posts, along with a\ncomment count, eg. \"42 comments (6 new)\". We found that joining posts\nagainst the last-read table is expensive, so instead we read from\nMemcached on every post to find the number of unread comments.\n\nWe use the thread's \"last commented at\" timestamp as part of the key\nso that when somebody posts a new comment, every user's cached unread\ncount is invalidated; it is automatically recalculated the next time\nthey view the post.\n\nA.\n", "msg_date": "Tue, 23 Jun 2009 22:13:41 +0200", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "Mathieu Nebra wrote:\n> Greg Stark a �crit :\n>> All the other comments are accurate, though it does seem like\n>> something the database ought to be able to handle.\n>>\n>> The other thing which hasn't been mentioned is that you have a lot of\n>> indexes. Updates require maintaining all those indexes. Are all of\n>> these indexes really necessary? Do you have routine queries which look\n>> up users based on their flags? 
Or all all your oltp transactions for\n>> specific userids in which case you probably just need the index on\n>> userid.\n> \n> \n> We are using these indexes, but I can't be sure if we _really_ need them\n> or not.\n> \n> I can go into detail. We have:\n> \n> UserID - TopicID - LastReadAnswerID - WrittenStatus - IsFavorite\n> \n> So basically, we toggle the boolean flag WrittenStatus when the user has\n> written in that topic. The same goes for IsFavorite.\n\nDo those last two columns hold much data? Another thing to consider is to split this into two tables:\n\n UserID - TopicID - LastReadAnswerID \n\n UserID - TopicID - WrittenStatus - IsFavorite\n\nAs others have pointed out, an UPDATE in Postgres is a select/delete/insert, and if you're updating just the LastReadAnswerID all the time, you're wasting time deleting and re-inserting a lot of data that never change (assuming they're not trivially small columns).\n\nThis might also solve the problem of too many indexes -- the table that's updated frequently would only have an index on (UserID, TopicID), so the update only affects one index.\n\nThen to minimize the impact on your app, create a view that looks like the original table for read-only apps.\n\nCraig\n", "msg_date": "Tue, 23 Jun 2009 14:29:04 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "On Tue, Jun 23, 2009 at 9:04 PM, Mathieu Nebra<[email protected]> wrote:\n> We have indexes on them, so we can SELECT every topic WHERE the user has\n> written. Is it the good way of doing this?\n\nI'm kind of skeptical that a simple index on userid,topic isn't\nsufficient to handle this case. But you would have to test it on\nactual data to be sure. It depends whether you have enough topics and\nenough userid,topic records for a given userid that scanning all the\ntopics for a given user is actually too slow.\n\nEven if it's necessary you might consider having a \"partial\" index on\nuser,topic WHERE writtenstatus instead of having a three-column index.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Wed, 24 Jun 2009 00:48:24 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "Mathieu Nebra wrote:\n> Alexander Staubo a écrit :\n> \n>> On Tue, Jun 23, 2009 at 1:12 PM, Mathieu Nebra<[email protected]> wrote:\n>> \n>>> This \"flags\" table has more or less the following fields:\n>>>\n>>> UserID - TopicID - LastReadAnswerID\n>>> \n>> We are doing pretty much same thing.\n>>\n>> \n>>> My problem is that everytime a user READS a topic, it UPDATES this flags\n>>> table to remember he has read it. This leads to multiple updates at the\n>>> same time on the same table, and an update can take a few seconds. This\n>>> is not acceptable for my users.\n>>> \n>> First of all, and I'm sure you thought of this, an update isn't needed\n>> every time a user reads a topic; only when there are new answers that\n>> need to be marked as read. So an \"update ... where last_read_answer_id\n>> < ?\" should avoid the need for an update.\n>> \n>\n> We don't work that way. 
We just \"remember\" he has read these answers and\n> then we can tell him \"there are no new messages for you to read\".\n> So we just need to write what he has read when he reads it.\n>\n> \n>> (That said, I believe PostgreSQL diffs tuple updates, so in practice\n>> PostgreSQL might not be writing anything if you run an \"update\" with\n>> the same value. I will let someone more intimate with the internal\n>> details of updates to comment on this.)\n>>\n>> Secondly, an update should not take \"a few seconds\". You might want to\n>> investigate this part before you turn to further optimizations.\n>> \n>\n> Yes, I know there is a problem but I don't know if I am competent enough\n> to tune PostgreSQL for that. It can take a while to understand the\n> problem, and I'm not sure I'll have the time for that.\n>\n> I am, however, opened to suggestions. Maybe I'm doing something wrong\n> somewhere.\n>\n> \n>> In our application we defer the updates to a separate asynchronous\n>> process using a simple queue mechanism, but in our case, we found that\n>> the updates are fast enough (in the order of a few milliseconds) not\n>> to warrant batching them into single transactions.\n>> \n>\n> A few milliseconds would be cool.\n> In fact, defering to another process is a good idea, but I'm not sure if\n> it is easy to implement. It would be great to have some sort of UPDATE\n> ... LOW PRIORITY to make the request non blocking.\n>\n> Thanks.\n>\n> \nI use pg_send_query() \n<http://ca2.php.net/manual/en/function.pg-send-query.php> in php to \nachieve this for a views counter. \"Script execution is not blocked while \nthe queries are executing.\"\n\nIt looks like this may just be a direct translation of PQsendQuery() \nfrom libpq. Your preferred language may have a function like this.\n\n\n\n\n\n\n\nMathieu Nebra wrote:\n\nAlexander Staubo a écrit :\n \n\nOn Tue, Jun 23, 2009 at 1:12 PM, Mathieu Nebra<[email protected]> wrote:\n \n\nThis \"flags\" table has more or less the following fields:\n\nUserID - TopicID - LastReadAnswerID\n \n\nWe are doing pretty much same thing.\n\n \n\nMy problem is that everytime a user READS a topic, it UPDATES this flags\ntable to remember he has read it. This leads to multiple updates at the\nsame time on the same table, and an update can take a few seconds. This\nis not acceptable for my users.\n \n\nFirst of all, and I'm sure you thought of this, an update isn't needed\nevery time a user reads a topic; only when there are new answers that\nneed to be marked as read. So an \"update ... where last_read_answer_id\n< ?\" should avoid the need for an update.\n \n\n\nWe don't work that way. We just \"remember\" he has read these answers and\nthen we can tell him \"there are no new messages for you to read\".\nSo we just need to write what he has read when he reads it.\n\n \n\n(That said, I believe PostgreSQL diffs tuple updates, so in practice\nPostgreSQL might not be writing anything if you run an \"update\" with\nthe same value. I will let someone more intimate with the internal\ndetails of updates to comment on this.)\n\nSecondly, an update should not take \"a few seconds\". You might want to\ninvestigate this part before you turn to further optimizations.\n \n\n\nYes, I know there is a problem but I don't know if I am competent enough\nto tune PostgreSQL for that. It can take a while to understand the\nproblem, and I'm not sure I'll have the time for that.\n\nI am, however, opened to suggestions. 
Maybe I'm doing something wrong\nsomewhere.\n\n \n\nIn our application we defer the updates to a separate asynchronous\nprocess using a simple queue mechanism, but in our case, we found that\nthe updates are fast enough (in the order of a few milliseconds) not\nto warrant batching them into single transactions.\n \n\n\nA few milliseconds would be cool.\nIn fact, defering to another process is a good idea, but I'm not sure if\nit is easy to implement. It would be great to have some sort of UPDATE\n... LOW PRIORITY to make the request non blocking.\n\nThanks.\n\n \n\nI use pg_send_query()\n<http://ca2.php.net/manual/en/function.pg-send-query.php> in php\nto achieve this for a views counter. \"Script execution is not blocked\nwhile the queries are executing.\"\n\nIt looks like this may just be a direct translation of PQsendQuery()\nfrom libpq. Your preferred language may have a function like this.", "msg_date": "Wed, 24 Jun 2009 00:16:38 -0700", "msg_from": "Chris St Denis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How would you store read/unread topic status?" }, { "msg_contents": "Craig James a �crit :\n> Mathieu Nebra wrote:\n>> Greg Stark a �crit :\n>>> All the other comments are accurate, though it does seem like\n>>> something the database ought to be able to handle.\n>>>\n>>> The other thing which hasn't been mentioned is that you have a lot of\n>>> indexes. Updates require maintaining all those indexes. Are all of\n>>> these indexes really necessary? Do you have routine queries which look\n>>> up users based on their flags? Or all all your oltp transactions for\n>>> specific userids in which case you probably just need the index on\n>>> userid.\n>>\n>>\n>> We are using these indexes, but I can't be sure if we _really_ need them\n>> or not.\n>>\n>> I can go into detail. We have:\n>>\n>> UserID - TopicID - LastReadAnswerID - WrittenStatus - IsFavorite\n>>\n>> So basically, we toggle the boolean flag WrittenStatus when the user has\n>> written in that topic. The same goes for IsFavorite.\n> \n> Do those last two columns hold much data? Another thing to consider is\n> to split this into two tables:\n\nThe last two columns only store TRUE or FALSE, they're booleans. So\nyou're saying that an index on them might be useless ? We're retrieving\n1000-2000 rows max and we need to extract only those who have TRUE on\nthe last column for example.\n\n> \n> UserID - TopicID - LastReadAnswerID\n> UserID - TopicID - WrittenStatus - IsFavorite\n> \n> As others have pointed out, an UPDATE in Postgres is a\n> select/delete/insert, and if you're updating just the LastReadAnswerID\n> all the time, you're wasting time deleting and re-inserting a lot of\n> data that never change (assuming they're not trivially small columns).\n\nThey are trivially small columns.\n\n> \n> This might also solve the problem of too many indexes -- the table\n> that's updated frequently would only have an index on (UserID, TopicID),\n> so the update only affects one index.\n\nI'll investigate that way.\n\n> \n> Then to minimize the impact on your app, create a view that looks like\n> the original table for read-only apps.\n\nGood idea, thanks again.\n", "msg_date": "Wed, 24 Jun 2009 09:42:30 +0200", "msg_from": "Mathieu Nebra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How would you store read/unread topic status?" 
}, { "msg_contents": "\n>> \n>>> In our application we defer the updates to a separate asynchronous\n>>> process using a simple queue mechanism, but in our case, we found that\n>>> the updates are fast enough (in the order of a few milliseconds) not\n>>> to warrant batching them into single transactions.\n>>> \n>>\n>> A few milliseconds would be cool.\n>> In fact, defering to another process is a good idea, but I'm not sure if\n>> it is easy to implement. It would be great to have some sort of UPDATE\n>> ... LOW PRIORITY to make the request non blocking.\n>>\n>> Thanks.\n>>\n>> \n> I use pg_send_query()\n> <http://ca2.php.net/manual/en/function.pg-send-query.php> in php to\n> achieve this for a views counter. \"Script execution is not blocked while\n> the queries are executing.\"\n> \n> It looks like this may just be a direct translation of PQsendQuery()\n> from libpq. Your preferred language may have a function like this.\n> \n\nI am using PHP. That was one of the thing I was looking for, thank you! :)\nWe'll combine this with a memcached solution so we just update every\n1000 views for example.\n\n", "msg_date": "Wed, 24 Jun 2009 10:08:09 +0200", "msg_from": "Mathieu Nebra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How would you store read/unread topic status?" } ]
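Pulling the read/unread thread's two cheapest suggestions together as a rough sketch — skip writes that would change nothing, and relax commit durability for this bookkeeping table. This is only an illustration: prj_frm_flg, flg_mid and flg_sid come from the EXPLAIN output quoted earlier, while flg_last_read is an assumed name for the LastReadAnswerID column.

    -- After the suggested move to 8.3, per session: trade durability of roughly
    -- the last second of these bookkeeping commits for far fewer synchronous
    -- WAL flushes.
    SET synchronous_commit TO off;

    -- Only write when the stored value would actually change, so re-reading an
    -- already-read topic creates no dead row and no index maintenance.
    -- (flg_last_read is a placeholder name for the "last read answer" column.)
    UPDATE prj_frm_flg
       SET flg_last_read = $3
     WHERE flg_mid = $1
       AND flg_sid = $2
       AND flg_last_read IS DISTINCT FROM $3;

If flg_last_read itself is left unindexed, 8.3's HOT updates let most of the remaining writes avoid touching the table's five indexes at all, which is the point Robert Haas and Greg Stark make above.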
[ { "msg_contents": "Hi there,\n\nPlease help me to make a decision on how to manage users.\n\nFor some reason it is easier in the project I'm working on to split data \nby schemes and assign them to Postgres' users (I mean those created with \nCREATE USER) rather than support 'owner' fields referring to a global \nusers table.\n\nThe question is what could be the consequences of having a large number \nof them (tens of thousands)?\n\nContext:\n\n- it is a web app\n- thousands of concurrent requests from different users\n- amount of user's data in the db is relatively small\n\nConcerns:\n\n- how big is the performance/memory penalty on switching users in the \nsame connection (connections are reused of course)?\n- will it hurt the cache?\n- are prepared statements kept per user or per connection?\n- is the query planner global or somehow tied to users?\n\nI'd be glad to hear any opinions/suggestions.\n\nBest regards,\nMike\n\n\n\n\n\n", "msg_date": "Tue, 23 Jun 2009 12:39:55 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": true, "msg_subject": "Implications of having large number of users" }, { "msg_contents": "Mike Ivanov wrote:\n> Please help me to make a decision on how to manage users.\n> \n> For some reason it is easier in the project I'm working on to split data \n> by schemes and assign them to Postgres' users (I mean those created with \n> CREATE USER) rather than support 'owner' fields referring to a global \n> users table.\n\nYou know that (unlike in Oracle) user and schema is not coupled in\nPostgreSQL, right? So you can have one user owning tables in various schemata\nand many users owning tables in one schema.\n\n> The question is what could be the consequences of having a large number \n> of them (tens of thousands)?\n\nIt shouldn't be a problem.\nThe only critical number is the number of concurrent connections\nat a given time.\n\n> Context:\n> \n> - it is a web app\n> - thousands of concurrent requests from different users\n> - amount of user's data in the db is relatively small\n> \n> Concerns:\n> \n> - how big is the performance/memory penalty on switching users in the \n> same connection (connections are reused of course)?\n> - will it hurt the cache?\n> - are prepared statements kept per user or per connection?\n> - is the query planner global or somehow tied to users?\n> \n> I'd be glad to hear any opinions/suggestions.\n\nYou cannot keep the connection and change users.\nA change of database user always means a new connection and a new backend\nprocess.\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 24 Jun 2009 10:32:19 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Implications of having large number of users" }, { "msg_contents": "On Jun 24, 2009, at 4:32 AM, \"Albe Laurenz\" <[email protected]> \nwrote:\n\n> Mike Ivanov wrote:\n>> Please help me to make a decision on how to manage users.\n>>\n>> For some reason it is easier in the project I'm working on to split \n>> data\n>> by schemes and assign them to Postgres' users (I mean those created \n>> with\n>> CREATE USER) rather than support 'owner' fields referring to a global\n>> users table.\n>\n> You know that (unlike in Oracle) user and schema is not coupled in\n> PostgreSQL, right? 
So you can have one user owning tables in various \n> schemata\n> and many users owning tables in one schema.\n>\n>> The question is what could be the consequences of having a large \n>> number\n>> of them (tens of thousands)?\n>\n> It shouldn't be a problem.\n> The only critical number is the number of concurrent connections\n> at a given time.\n>\n>> Context:\n>>\n>> - it is a web app\n>> - thousands of concurrent requests from different users\n>> - amount of user's data in the db is relatively small\n>>\n>> Concerns:\n>>\n>> - how big is the performance/memory penalty on switching users in the\n>> same connection (connections are reused of course)?\n>> - will it hurt the cache?\n>> - are prepared statements kept per user or per connection?\n>> - is the query planner global or somehow tied to users?\n>>\n>> I'd be glad to hear any opinions/suggestions.\n\nA bunch of small tables might possibly take up more space than a \nsmaller number of larger tables, increasing memory requirements...\n\n> You cannot keep the connection and change users.\n> A change of database user always means a new connection and a new \n> backend\n> process.\n\nI don't think this is true. You can use SET SESSION AUTHORIZATION, \nright?\n\n...Robert\n", "msg_date": "Wed, 24 Jun 2009 07:30:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Implications of having large number of users" }, { "msg_contents": "Robert Haas wrote:\n> > You cannot keep the connection and change users.\n> > A change of database user always means a new connection and a new \n> > backend process.\n> \n> I don't think this is true. You can use SET SESSION AUTHORIZATION, \n> right?\n\nYou are right, I overlooked that.\nIt is restricted to superusers though.\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 24 Jun 2009 15:02:57 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Implications of having large number of users" }, { "msg_contents": "\"Albe Laurenz\" <[email protected]> writes:\n> Robert Haas wrote:\n>> I don't think this is true. You can use SET SESSION AUTHORIZATION, \n>> right?\n\n> You are right, I overlooked that.\n> It is restricted to superusers though.\n\nThat sort of thing is only workable if you have trustworthy client code\nthat controls what queries the users can issue. If someone can send raw\nSQL commands then he just needs to do RESET SESSION AUTHORIZATION to\nbecome superuser.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Jun 2009 09:52:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Implications of having large number of users " }, { "msg_contents": "On Wed, Jun 24, 2009 at 9:52 AM, Tom Lane<[email protected]> wrote:\n> \"Albe Laurenz\" <[email protected]> writes:\n>> Robert Haas wrote:\n>>> I don't think this is true.  You can use SET SESSION AUTHORIZATION,\n>>> right?\n>\n>> You are right, I overlooked that.\n>> It is restricted to superusers though.\n>\n> That sort of thing is only workable if you have trustworthy client code\n> that controls what queries the users can issue.  
If someone can send raw\n> SQL commands then he just needs to do RESET SESSION AUTHORIZATION to\n> become superuser.\n\nGood point, although since the OP said it was a webapp, they probably\nhave control over that.\n\n...Robert\n", "msg_date": "Wed, 24 Jun 2009 10:30:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Implications of having large number of users" }, { "msg_contents": "\n> I'd be glad to hear any opinions/suggestions.\n\nMany thanks to everyone who responded!\n\nMike\n\n", "msg_date": "Thu, 25 Jun 2009 11:18:31 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Implications of having large number of users" } ]
[ { "msg_contents": "Sorry, just in case anyone is filtering on that in the subject line ...\n\nOn Tue, Jun 23, 2009 at 4:41 PM, Alan McKay<[email protected]> wrote:\n> BTW, our designer got the nytprofile or whatever it is called for Perl\n> and found out that it was a problem with the POE library that was\n> being used as a state-machine to drive the whole load suite.   It was\n> taking something like 95% of the CPU time!\n>\n> On Fri, Jun 19, 2009 at 11:59 AM, Alan McKay<[email protected]> wrote:\n>> Hey folks,\n>>\n>> I'm new to all this stuff, and am sitting here with kSar looking at\n>> some graphed results of some load tests we did, trying to figure\n>> things out :-)\n>>\n>> We got some unsatisfactory results in stressing our system, and now I\n>> have to divine where the bottleneck is.\n>>\n>> We did 4 tests, upping the load each time.   The 3rd and 4th ones have\n>> all 8 cores pegged at about 95%.  Yikes!\n>>\n>> In the first test the processor running queue spikes at 7 and maybe\n>> averages 4 or 5\n>>\n>> In the last test it spikes at 33 with an average maybe 25.\n>>\n>> Looks to me like it could be a CPU bottleneck.  But I'm new at this :-)\n>>\n>> Is there a general rule of thumb \"if queue is longer than X, it is\n>> likely a bottleneck?\"\n>>\n>> In reading an IBM Redbook on Linux performance, I also see this :\n>> \"High numbers of context switches in connection with a large number of\n>> interrupts can signal driver or application issues.\"\n>>\n>> On my first test where the CPU is not pegged, context switching goes\n>> from about 3700 to about 4900, maybe averaging 4100\n>>\n>> On the pegged test, the values are maybe 10% higher than that, maybe 15%.\n>>\n>> It is an IBM 3550 with 8 cores, 2660.134 MHz (from dmesg), 32Gigs RAM\n>>\n>> thanks,\n>> -Alan\n>>\n>> --\n>> “Don't eat anything you've ever seen advertised on TV”\n>>         - Michael Pollan, author of \"In Defense of Food\"\n>>\n>\n>\n>\n> --\n> “Don't eat anything you've ever seen advertised on TV”\n>         - Michael Pollan, author of \"In Defense of Food\"\n>\n\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Tue, 23 Jun 2009 19:21:38 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "SOLVED: processor running queue - general rule of thumb?" } ]
[ { "msg_contents": "Is tsvector_update_trigger() smart enough to not bother updating a \ntsvector if the text in that column has not changed?\n\nIf not, can I make my own update trigger with something like\n\n if new.description != old.description\n return tsvector_update_trigger('fti_all', 'pg_catalog.english',\n 'title', 'keywords', 'description');\n else\n return new;\n\nor do I need to do it from scratch?\n\n\nI'm seeing very high cpu load on my database server and my current \ntheory is that some of the triggers may be causing it.\n\n\n\n\n\n\nIs tsvector_update_trigger() smart enough to not bother updating a\ntsvector if the text in that column has not changed?\n\nIf not, can I make my own update trigger with something like\nif new.description != old.description\n    return tsvector_update_trigger('fti_all', 'pg_catalog.english',\n'title', 'keywords', 'description');\nelse\n    return new;\n\nor do I need to do it from scratch?\n\n\nI'm seeing very high cpu load on my database server and my current\ntheory is that some of the triggers may be causing it.", "msg_date": "Wed, 24 Jun 2009 00:20:18 -0700", "msg_from": "Chris St Denis <[email protected]>", "msg_from_op": true, "msg_subject": "tsvector_update_trigger performance?" }, { "msg_contents": "On Wed, 24 Jun 2009, Chris St Denis wrote:\n\n> Is tsvector_update_trigger() smart enough to not bother updating a tsvector \n> if the text in that column has not changed?\n\nno, you should do check yourself. There are several examples in mailing lists.\n\n>\n> If not, can I make my own update trigger with something like\n>\n> if new.description != old.description\n> return tsvector_update_trigger('fti_all', 'pg_catalog.english',\n> 'title', 'keywords', 'description');\n> else\n> return new;\n>\n> or do I need to do it from scratch?\n>\n>\n> I'm seeing very high cpu load on my database server and my current theory is \n> that some of the triggers may be causing it.\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Wed, 24 Jun 2009 11:27:21 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsvector_update_trigger performance?" }, { "msg_contents": "Oleg Bartunov wrote:\n> On Wed, 24 Jun 2009, Chris St Denis wrote:\n>\n>> Is tsvector_update_trigger() smart enough to not bother updating a \n>> tsvector if the text in that column has not changed?\n>\n> no, you should do check yourself. There are several examples in mailing lists.\n\nOr you could try using the supress_redundant_updates_trigger() function\nthat has been included in 8.4 (should be easy to backport)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 24 Jun 2009 12:29:15 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsvector_update_trigger performance?" }, { "msg_contents": "Hi,\n\nLe 24 juin 09 � 18:29, Alvaro Herrera a �crit :\n> Oleg Bartunov wrote:\n>> On Wed, 24 Jun 2009, Chris St Denis wrote:\n>>\n>>> Is tsvector_update_trigger() smart enough to not bother updating a\n>>> tsvector if the text in that column has not changed?\n>>\n>> no, you should do check yourself. 
There are several examples in \n>> mailing lists.\n>\n> Or you could try using the supress_redundant_updates_trigger() \n> function\n> that has been included in 8.4 (should be easy to backport)\n\n http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/backports/min_update/\n http://blog.tapoueh.org/projects.html#sec9\n\nBut it won't handle the case where some other random column has \nchanged, but the UPDATE is not affecting the text indexed...\n-- \ndim", "msg_date": "Wed, 24 Jun 2009 22:03:21 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsvector_update_trigger performance?" }, { "msg_contents": "Dimitri Fontaine wrote:\n> Hi,\n>\n> On 24 Jun 09, at 18:29, Alvaro Herrera wrote:\n>> Oleg Bartunov wrote:\n>>> On Wed, 24 Jun 2009, Chris St Denis wrote:\n>>>\n>>>> Is tsvector_update_trigger() smart enough to not bother updating a\n>>>> tsvector if the text in that column has not changed?\n>>>\n>>> no, you should do check yourself. There are several examples in \n>>> mailing lists.\n>>\n>> Or you could try using the supress_redundant_updates_trigger() function\n>> that has been included in 8.4 (should be easy to backport)\n>\n> http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/backports/min_update/\n> http://blog.tapoueh.org/projects.html#sec9\n>\n> But it won't handle the case where some other random column has \n> changed, but the UPDATE is not affecting the text indexed...\nTho this looks useful for some things, it doesn't solve my specific \nproblem any. But thanks for the suggestion anyway.\n\nThis sounds like something that should just be on by default, not a \ntrigger. Is there some reason it would waste the io of writing a new row \nto disk if nothing has changed? or is it just considered too much \nunnecessary overhead to compare them?\n", "msg_date": "Wed, 24 Jun 2009 21:03:08 -0700", "msg_from": "Chris St Denis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tsvector_update_trigger performance?" }, { "msg_contents": "On Wed, 2009-06-24 at 21:03 -0700, Chris St Denis wrote:\n> This sounds like something that should just be on by default, not a \n> trigger. Is there some reason it would waste the io of writing a new row \n> to disk if nothing has changed? or is it just considered too much \n> unnecessary overhead to compare them?\n\nI think the theory is that carefully written applications generally do\nnot generate redundant updates in the first place. An application that\navoids redundant updates should not have to pay the cost of redundant\nupdate detection and elimination.\n\n-- \nCraig Ringer\n\n", "msg_date": "Thu, 25 Jun 2009 13:45:19 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsvector_update_trigger performance?" }, { "msg_contents": "Also consider on update triggers that you could want to run anyway\n\n-- \ndim\n\nOn 25 Jun 2009, at 07:45, Craig Ringer <[email protected]> wrote:\n\n> On Wed, 2009-06-24 at 21:03 -0700, Chris St Denis wrote:\n>> This sounds like something that should just be on by default, not a\n>> trigger. Is there some reason it would waste the io of writing a \n>> new row\n>> to disk if nothing has changed? or is it just considered too much\n>> unnecessary overhead to compare them?\n>\n> I think the theory is that carefully written applications generally do\n> not generate redundant updates in the first place. 
An application that\n> avoids redundant updates should not have to pay the cost of redundant\n> update detection and elimination.\n>\n> -- \n> Craig Ringer\n>\n", "msg_date": "Thu, 25 Jun 2009 08:55:40 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsvector_update_trigger performance?" } ]
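For readers who reach this thread looking for the "check it yourself" trigger Oleg mentions, here is a minimal sketch. The column names (title, keywords, description and the tsvector column fti_all) and the text search configuration come from the original post; the table name documents is a placeholder, and the simple concatenation applies no per-column weights, so treat this as a starting point rather than a finished trigger.

  CREATE OR REPLACE FUNCTION fti_all_refresh() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'UPDATE' THEN
          IF NEW.title IS NOT DISTINCT FROM OLD.title
             AND NEW.keywords IS NOT DISTINCT FROM OLD.keywords
             AND NEW.description IS NOT DISTINCT FROM OLD.description THEN
              RETURN NEW;  -- indexed text unchanged: keep the existing tsvector
          END IF;
      END IF;
      NEW.fti_all := to_tsvector('pg_catalog.english',
          coalesce(NEW.title, '')       || ' ' ||
          coalesce(NEW.keywords, '')    || ' ' ||
          coalesce(NEW.description, ''));
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER fti_all_refresh
      BEFORE INSERT OR UPDATE ON documents
      FOR EACH ROW EXECUTE PROCEDURE fti_all_refresh();

On an UPDATE that leaves the three text columns alone, the function returns early and the old tsvector value is carried over unchanged, which is the saving the original poster was after.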
[ { "msg_contents": "Morning all,\n A colleague here tried to post this yesterday but it was stalled for\nsome reason. Anyway, here's what we're seeing which hopefully someone\nhas some pointers for.\n \nEssentially, we're seeing a query plan that is taking 95 secs with a\nnested loop execution plan and 1 sec with a merge join plan. We've\ntried increasing the default_statistics_target to 1000 and re-analyzed\nbut the same query plan is returned. If we then force nested loops off\n(set enable_nestloop=false), the optimizer chooses the better plan and\nexecution is under 1 second.\n \n\"Default\" explain plan: http://explain.depesz.com/s/a3\n<http://explain.depesz.com/s/a3> (execution time 95secs)\n \n\"Nested loops off\" plan: http://explain.depesz.com/s/JV\n<http://explain.depesz.com/s/JV> (execution time ~ 1sec)\n \nWe're currently running 8.1.8 (yeah, we know it's old skool but it's\nembedded as part of an application) so the real questions are:\n \nIs there further optimizations we can do to change the plan?\nIs this perhaps addressed in a later release?\n \nSome postgresql.conf settings that might be useful:\n \neffective_cache_size 511082\n shared_buffers 30000\n work_mem 4096\n random_page_cost 4\n join_collapse_limit 8\n \n \nand of course, the query in question that generates the plan:\n \nSELECT web_user_type,\n web_user.web_user_id as id,\n cast(web_user_property_node.prop_val as numeric) as node_id ,\n node_name,\n last_name || ', ' || first_name as name,\n web_user_property_directory_inbox.prop_val as\ndirectory_location_inbox,\n web_user_property_directory_outbox.prop_val as\ndirectory_location_outbox,\n username,\n first_name,\n last_name,\n email\nFROM\n web_user LEFT JOIN web_user_property as\nweb_user_property_directory_outbox ON web_user.web_user_id =\nweb_user_property_directory_outbox.web_user_id AND \n web_user_property_directory_outbox.prop_key like\n'location_node_directory_outbox', web_user_property, web_user_property\nas web_user_property_directory_inbox,\n web_user_property as web_user_property_node, node WHERE\nweb_user.web_user_id = web_user_property_directory_inbox.web_user_id AND\nweb_user.web_user_id = web_user_property.web_user_id AND\nweb_user_property.prop_key = 'location_node_enabled' AND\nweb_user_property.prop_val = 'true' AND\nweb_user_property_directory_inbox.prop_key like\n'location_node_directory_inbox' AND web_user.web_user_id =\nweb_user_property_node.web_user_id AND web_user_property_node.prop_key\nlike 'location_node_id' AND web_user_property_node.prop_val =\nnode.node_id AND (first_name ilike '%' OR last_name ilike '%' OR \n last_name || ',' || first_name ilike '%') AND node.node_id IN (\nSELECT node_id FROM node_execute \n WHERE acl_web_user_id = 249) AND\nweb_user.web_user_id IN ( SELECT web_user_id FROM web_user_read \n WHERE acl_web_user_id = 249 OR \n web_user_id IN ( SELECT member_id FROM web_user_grp_member \n WHERE web_user_id IN( SELECT acl_web_user_id \n FROM web_user_read \n WHERE web_user_id IN\n(SELECT web_user_id FROM web_user_grp_member \n WHERE member_id =\n249)))) ORDER BY name;\n\n \nThanks in advance\n \nDave\n \nDave North\[email protected]\n\n\n\n\n\nMorning \nall,\n    A \ncolleague here tried to post this yesterday but it was stalled for some \nreason.  Anyway, here's what we're seeing which hopefully someone has some \npointers for.\n \nEssentially, \nwe're seeing a query plan that is taking 95 secs with a nested loop \nexecution plan and 1 sec with a merge join plan.  
We've tried increasing the \ndefault_statistics_target to 1000 and re-analyzed but the same query plan is \nreturned.  If we then force nested loops off (set enable_nestloop=false), \nthe optimizer chooses the better plan and execution is under 1 \nsecond.\n \n\"Default\" \nexplain plan: http://explain.depesz.com/s/a3  (execution time \n95secs)\n \n\"Nested loops \noff\" plan: http://explain.depesz.com/s/JV (execution time ~ \n1sec)\n \nWe're currently \nrunning 8.1.8 (yeah, we know it's old skool but it's embedded as part of an \napplication) so the real questions are:\n \nIs there further \noptimizations we can do to change the plan?\nIs this perhaps \naddressed in a later release?\n \nSome postgresql.conf settings that might be \nuseful:  \neffective_cache_size           \n511082 shared_buffers                  \n30000 work_mem                         \n4096 random_page_cost                    \n4 join_collapse_limit                 \n8\n \n \nand of course, the query in question that generates the \nplan:\n \nSELECT web_user_type,  \nweb_user.web_user_id as id,  cast(web_user_property_node.prop_val as \nnumeric) as node_id ,  node_name,  last_name || ', ' || \nfirst_name  as name,  web_user_property_directory_inbox.prop_val \nas directory_location_inbox,  \nweb_user_property_directory_outbox.prop_val as \ndirectory_location_outbox,  username,  first_name,  \nlast_name,  emailFROM web_user LEFT JOIN web_user_property \nas web_user_property_directory_outbox ON web_user.web_user_id = \nweb_user_property_directory_outbox.web_user_id AND    \nweb_user_property_directory_outbox.prop_key like \n'location_node_directory_outbox',  web_user_property,  \nweb_user_property as \nweb_user_property_directory_inbox, web_user_property as \nweb_user_property_node,  node WHERE  web_user.web_user_id = \nweb_user_property_directory_inbox.web_user_id AND  web_user.web_user_id = \nweb_user_property.web_user_id  AND  web_user_property.prop_key = \n'location_node_enabled' AND  web_user_property.prop_val = 'true' AND  \nweb_user_property_directory_inbox.prop_key like 'location_node_directory_inbox' \nAND  web_user.web_user_id = web_user_property_node.web_user_id AND  \nweb_user_property_node.prop_key like 'location_node_id' AND  \nweb_user_property_node.prop_val = node.node_id AND  (first_name ilike '%' \nOR last_name ilike '%' OR    last_name || ',' || first_name ilike \n'%') AND  node.node_id  IN ( SELECT node_id FROM node_execute \n                      \nWHERE acl_web_user_id = 249) AND  web_user.web_user_id IN ( SELECT \nweb_user_id FROM web_user_read \n                             \nWHERE acl_web_user_id  = 249  OR    web_user_id IN ( \nSELECT member_id FROM web_user_grp_member \n                     \nWHERE web_user_id IN( SELECT acl_web_user_id \n                              \nFROM web_user_read \n                                             \nWHERE web_user_id IN (SELECT web_user_id FROM web_user_grp_member  \n                                                \nWHERE member_id = 249))))  ORDER BY name;\n \nThanks in \nadvance\n \nDave\n \nDave \nNorth\[email protected]", "msg_date": "Wed, 24 Jun 2009 07:43:23 -0500", "msg_from": "\"Dave North\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nested Loop \"Killer\" on 8.1" }, { "msg_contents": "Dave,\n\n> Is there further optimizations we can do to change the plan?\n> Is this perhaps addressed in a later release?\n\nGiven the left joins, a later release might help; I know we did a lot to \nimprove left join plans in 8.3. 
It would be worth testing if you can \ntest an upgrade easily.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Thu, 25 Jun 2009 13:16:42 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop \"Killer\" on 8.1" }, { "msg_contents": "On Wed, Jun 24, 2009 at 1:43 PM, Dave North<[email protected]> wrote:\n\n> Essentially, we're seeing a query plan that is taking 95 secs with a nested\n> loop execution plan and 1 sec with a merge join plan.  We've tried\n> increasing the default_statistics_target to 1000 and re-analyzed but the\n> same query plan is returned.  If we then force nested loops off (set\n> enable_nestloop=false), the optimizer chooses the better plan and execution\n> is under 1 second.\n>\n> \"Default\" explain plan: http://explain.depesz.com/s/a3  (execution time\n> 95secs)\n>\n> \"Nested loops off\" plan: http://explain.depesz.com/s/JV (execution time ~\n> 1sec)\n\nThe planner is coming up with a bad estimate for the number of rows\nmatching this filter:\n\nFilter: ((prop_key)::text ~~ 'location_node_directory_outbox'::text)\n\nWhich is coming from this condition:\n\n> AND\n>    web_user_property_directory_outbox.prop_key like\n> 'location_node_directory_outbox'\n\nWhy use \"like\" for a constant string with no % or _ characters? If you\nused = the planner might be able to come up with a better estimate.\n\nThat said I suspect Dave's right that your best course of action would\nbe to update to 8.3 or wait a couple weeks and update to 8.4 when it\ncomes out.\n\nRegardless you *really* want to update your 8.1.8 install to the\nlatest bug-fix release (currently 8.1.17). That's not an upgrade and\nwon't need a dump/reload.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Thu, 25 Jun 2009 21:36:43 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop \"Killer\" on 8.1" }, { "msg_contents": "On 06/25/2009 04:36 PM, Greg Stark wrote:\n>> AND\n>> web_user_property_directory_outbox.prop_key like\n>> 'location_node_directory_outbox'\n>> \n>\n> Why use \"like\" for a constant string with no % or _ characters? If you\n> used = the planner might be able to come up with a better estimate\n\nAny reason why \"like\" with a constant string without % or _ is not \noptimized to = today?\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n\n\n\n\n\n\nOn 06/25/2009 04:36 PM, Greg Stark wrote:\n\n\nAND\n   web_user_property_directory_outbox.prop_key like\n'location_node_directory_outbox'\n \n\n\nWhy use \"like\" for a constant string with no % or _ characters? If you\nused = the planner might be able to come up with a better estimate\n\n\nAny reason why \"like\" with a constant string without % or _ is not\noptimized to = today?\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Thu, 25 Jun 2009 16:57:42 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop \"Killer\" on 8.1" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> On Wed, Jun 24, 2009 at 1:43 PM, Dave North<[email protected]> wrote:\n> Why use \"like\" for a constant string with no % or _ characters? If you\n> used = the planner might be able to come up with a better estimate.\n\nUh, it appears to me the string *does* contain _ characters; perhaps the\nOP has neglected to escape those?\n\nThe planner does know enough to estimate LIKE with a fixed pattern as\nbeing equivalent to =. 
I think it knew that even back in 8.1, but am\ntoo lazy to look right now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Jun 2009 17:05:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop \"Killer\" on 8.1 " }, { "msg_contents": "On Thu, Jun 25, 2009 at 10:05 PM, Tom Lane<[email protected]> wrote:\n>\n> Uh, it appears to me the string *does* contain _ characters; perhaps the\n> OP has neglected to escape those?\n\nSigh. Indeed.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Thu, 25 Jun 2009 22:30:05 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop \"Killer\" on 8.1" }, { "msg_contents": "Greg/Tom/Josh,\n\tThanks for your comments about this problem...very much\nappreciated. We have resolve the issue by re-doing the query partly\nbased on your advice and partly just spending more time in analysis.\nThere's one oddball thing we turned up which I'm including below in the\nfull series of steps we did to optimize things around the \"explain\"\nfunctionality.\n\n\n1) The original query (89 rows returned) with an EXPLAIN ANALYZE takes\nover 300 secs. Without the explain analyze, it runs in 45 seconds.\nWith nested loops disabled (and hence forcing a merge), it completes in\nunder 1 second.\n\nThe outstanding question here is why does the explain analyze take\n(quite a bit) longer than just executing the query?\n\n2) Removing the LEFT JOIN (89 rows returned)\n\t- lowered query execution time to 37 secs\n\n3) Changing the 3 occurrences of (prop_key LIKE 'string...') to = \n\t- row estimate improved from 1 to 286 \n\t- query execution time still at 37 secs \n\n4) Adding a DISTINCT to the IN subquery on \n\t- records returned in subquery changes from 2194 to 112. \n \t- ... web_user.web_user_id IN (SELECT DISTINCT web_user_id\n\t- query execution time falls to 1 sec. \n\nWe then ran a totally unscientific test (unscientific because this was\non a different machine, different OS, etc.) just to see if there was any\ndifference between newer versions of Postgres and that which is bundled\nwith the application.\n\nUsing 8.3 on a Windows desktop\n - original query executes in 7 secs\n - improved query executes in 6 secs\n\nSo it seems there may well be some changes in newer versions which we\ncan take advantage of. More fuel to look into upgrading the embedded\ndatabase version ;)\n\nAgain, thanks all for the input.\n\nRegards\nDave\n \n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf \n> Of Greg Stark\n> Sent: June 25, 2009 5:30 PM\n> To: Tom Lane\n> Cc: Dave North; [email protected]\n> Subject: Re: [PERFORM] Nested Loop \"Killer\" on 8.1\n> \n> On Thu, Jun 25, 2009 at 10:05 PM, Tom Lane<[email protected]> wrote:\n> >\n> > Uh, it appears to me the string *does* contain _ \n> characters; perhaps \n> > the OP has neglected to escape those?\n> \n> Sigh. Indeed.\n> \n> --\n> greg\n> http://mit.edu/~gsstark/resume.pdf\n> \n", "msg_date": "Fri, 26 Jun 2009 12:39:48 -0500", "msg_from": "\"Dave North\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested Loop \"Killer\" on 8.1" }, { "msg_contents": "\"Dave North\" <[email protected]> writes:\n> The outstanding question here is why does the explain analyze take\n> (quite a bit) longer than just executing the query?\n\nEXPLAIN ANALYZE has nontrivial measurement overhead, especially on\nplatforms with slow gettimeofday(). 
Old/cheap PC hardware, in particular,\ntends to suck in this respect.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jun 2009 18:22:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop \"Killer\" on 8.1 " } ]
[ { "msg_contents": "Hello,\n\nI have a table like following. To increase the performance of this\ntable, I would like to create CLUSTER.\nFirst, Which index should I use on this table for CLUSTER?\nSecondly, Can I create multiple CLUSTER on the same table?\nI will appreciate, if you can suggest other options to increase the\nperformance of the table.\nI use this table to save metadata of the mails on my system.\n\n\nmail=# \\d maillogs\n Table \"public.maillogs\"\n Column | Type |\n Modifiers\n--------------------+-----------------------------+-------------------------------------------------------\n id | bigint | not null default\nnextval('maillogs_id_seq'::regclass)\n queueid | character varying(255) | not null default\n'*'::character varying\n recvtime | timestamp without time zone | default now()\n remoteip | character varying(128) | not null default\n'0.0.0.0'::character varying\n relayflag | smallint | not null default\n(0)::smallint\n retaction | integer |\n retval | integer | not null default 0\n probspam | double precision | not null default\n(0)::double precision\n messageid | text |\n fromaddress | text | not null\n toaddress | text | not null\n envelopesender | text |\n enveloperecipients | text |\n messagesubject | text |\n size | bigint |\n logstr | character varying(1024) |\n destinationaddress | character varying(255) |\n quarantinepath | character varying(1024) | not null default\n''::character varying\n backuppath | character varying(1024) | not null default\n''::character varying\n quarantineflag | smallint | not null default\n(0)::smallint\n backupflag | smallint | not null default\n(0)::smallint\n deletedflag | smallint | not null default 0\n profileid | integer | not null default 0\nIndexes:\n \"maillogs_pkey\" PRIMARY KEY, btree (id) CLUSTER\n \"idx_maillogs_backupflag\" btree (backupflag)\n \"idx_maillogs_deletedflag\" btree (deletedflag)\n \"idx_maillogs_enveloperecipients\" btree (enveloperecipients)\n \"idx_maillogs_envelopesender\" btree (envelopesender)\n \"idx_maillogs_messagesubject\" btree (messagesubject)\n \"idx_maillogs_quarantineflag\" btree (quarantineflag)\n \"idx_maillogs_recvtime\" btree (recvtime)\n \"idx_maillogs_remoteip\" btree (remoteip)\n \"idx_maillogs_revtal\" btree (retval)\nForeign-key constraints:\n \"maillogs_profileid_fkey\" FOREIGN KEY (profileid) REFERENCES\nprofiles(profileid)\nTriggers:\n maillogs_insert AFTER INSERT ON maillogs FOR EACH ROW EXECUTE\nPROCEDURE maillogs_insert()\n\nmail=#\n", "msg_date": "Wed, 24 Jun 2009 20:32:14 +0300", "msg_from": "Ibrahim Harrani <[email protected]>", "msg_from_op": true, "msg_subject": "cluster index on a table" }, { "msg_contents": "Clustering reorganizes the layout of a table according to\nthe ordering of a SINGLE index. This will place items that\nare adjacent in the index adjacent in the heap. So you need\nto cluster on the index that will help the locality of\nreference for the queries which will benefit you the most.\nExecution time sensitive queries are a good way to choose.\n\nCheers,\nKen\n\nOn Wed, Jun 24, 2009 at 08:32:14PM +0300, Ibrahim Harrani wrote:\n> Hello,\n> \n> I have a table like following. 
To increase the performance of this\n> table, I would like to create CLUSTER.\n> First, Which index should I use on this table for CLUSTER?\n> Secondly, Can I create multiple CLUSTER on the same table?\n> I will appreciate, if you can suggest other options to increase the\n> performance of the table.\n> I use this table to save metadata of the mails on my system.\n> \n> \n> mail=# \\d maillogs\n> Table \"public.maillogs\"\n> Column | Type |\n> Modifiers\n> --------------------+-----------------------------+-------------------------------------------------------\n> id | bigint | not null default\n> nextval('maillogs_id_seq'::regclass)\n> queueid | character varying(255) | not null default\n> '*'::character varying\n> recvtime | timestamp without time zone | default now()\n> remoteip | character varying(128) | not null default\n> '0.0.0.0'::character varying\n> relayflag | smallint | not null default\n> (0)::smallint\n> retaction | integer |\n> retval | integer | not null default 0\n> probspam | double precision | not null default\n> (0)::double precision\n> messageid | text |\n> fromaddress | text | not null\n> toaddress | text | not null\n> envelopesender | text |\n> enveloperecipients | text |\n> messagesubject | text |\n> size | bigint |\n> logstr | character varying(1024) |\n> destinationaddress | character varying(255) |\n> quarantinepath | character varying(1024) | not null default\n> ''::character varying\n> backuppath | character varying(1024) | not null default\n> ''::character varying\n> quarantineflag | smallint | not null default\n> (0)::smallint\n> backupflag | smallint | not null default\n> (0)::smallint\n> deletedflag | smallint | not null default 0\n> profileid | integer | not null default 0\n> Indexes:\n> \"maillogs_pkey\" PRIMARY KEY, btree (id) CLUSTER\n> \"idx_maillogs_backupflag\" btree (backupflag)\n> \"idx_maillogs_deletedflag\" btree (deletedflag)\n> \"idx_maillogs_enveloperecipients\" btree (enveloperecipients)\n> \"idx_maillogs_envelopesender\" btree (envelopesender)\n> \"idx_maillogs_messagesubject\" btree (messagesubject)\n> \"idx_maillogs_quarantineflag\" btree (quarantineflag)\n> \"idx_maillogs_recvtime\" btree (recvtime)\n> \"idx_maillogs_remoteip\" btree (remoteip)\n> \"idx_maillogs_revtal\" btree (retval)\n> Foreign-key constraints:\n> \"maillogs_profileid_fkey\" FOREIGN KEY (profileid) REFERENCES\n> profiles(profileid)\n> Triggers:\n> maillogs_insert AFTER INSERT ON maillogs FOR EACH ROW EXECUTE\n> PROCEDURE maillogs_insert()\n> \n> mail=#\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n", "msg_date": "Wed, 24 Jun 2009 12:40:26 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "As another poster pointed out, you cluster on ONE index and one index\nonly. However, you can cluster on a multi-column index.\n", "msg_date": "Wed, 24 Jun 2009 11:42:08 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "Hi,\n\nthanks for your suggestion.\nIs there any benefit of setting fillfactor to 70 or 80 on this table?\n\n\n\nOn Wed, Jun 24, 2009 at 8:42 PM, Scott Marlowe<[email protected]> wrote:\n> As another poster pointed out, you cluster on ONE index and one index\n> only.  
However, you can cluster on a multi-column index.\n>\n", "msg_date": "Wed, 15 Jul 2009 18:04:23 +0300", "msg_from": "Ibrahim Harrani <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "If you have a lot of insert/update/delete activity on a table fillfactor can help.\n\nI don't believe that postgres will try and maintain the table in the cluster order however.\n\n\nOn 7/15/09 8:04 AM, \"Ibrahim Harrani\" <[email protected]> wrote:\n\nHi,\n\nthanks for your suggestion.\nIs there any benefit of setting fillfactor to 70 or 80 on this table?\n\n\n\nOn Wed, Jun 24, 2009 at 8:42 PM, Scott Marlowe<[email protected]> wrote:\n> As another poster pointed out, you cluster on ONE index and one index\n> only. However, you can cluster on a multi-column index.\n>\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\nRe: [PERFORM] cluster index on a table\n\n\nIf you have a lot of insert/update/delete activity on a table fillfactor can help.\n\nI don’t believe that postgres will try and maintain the table in the cluster order however.\n\n\nOn 7/15/09 8:04 AM, \"Ibrahim Harrani\" <[email protected]> wrote:\n\nHi,\n\nthanks for your suggestion.\nIs there any benefit of setting fillfactor to 70 or 80 on this table?\n\n\n\nOn Wed, Jun 24, 2009 at 8:42 PM, Scott Marlowe<[email protected]> wrote:\n> As another poster pointed out, you cluster on ONE index and one index\n> only.  However, you can cluster on a multi-column index.\n>\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 15 Jul 2009 17:33:30 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "Is there any interest in adding that (continual/automatic cluster\norder maintenance) to a future release?\n\nOn Wed, Jul 15, 2009 at 8:33 PM, Scott Carey<[email protected]> wrote:\n> If you have a lot of insert/update/delete activity on a table fillfactor can\n> help.\n>\n> I don’t believe that postgres will try and maintain the table in the cluster\n> order however.\n>\n>\n> On 7/15/09 8:04 AM, \"Ibrahim Harrani\" <[email protected]> wrote:\n>\n> Hi,\n>\n> thanks for your suggestion.\n> Is there any benefit of setting fillfactor to 70 or 80 on this table?\n>\n>\n>\n> On Wed, Jun 24, 2009 at 8:42 PM, Scott Marlowe<[email protected]>\n> wrote:\n>> As another poster pointed out, you cluster on ONE index and one index\n>> only.  However, you can cluster on a multi-column index.\n>>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n", "msg_date": "Wed, 15 Jul 2009 22:17:27 -0400", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "On Wed, Jul 15, 2009 at 10:36 PM, Scott Marlowe <[email protected]>wrote:\n\n> I'd love to see it.\n\n\n +1 for index organized tables\n\n--Scott\n\nOn Wed, Jul 15, 2009 at 10:36 PM, Scott Marlowe <[email protected]> wrote:\nI'd love to see it.  
+1 for index organized tables --Scott", "msg_date": "Thu, 16 Jul 2009 08:34:42 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "Either true Index Organized Tables or Clustered Indexes would be very useful\nfor a variety of table/query types. The latter seems more difficult with\nPostgres' MVCC model since it requires data to be stored in the index that\nis authoritative.\n\nStoring the full tuple in an index and not even having a data only page\nwould also be an interesting approach to this (and perhaps simpler than a\nseparate index file and data file if trying to keep the data in the order of\nthe index).\n\n\n\nOn 7/15/09 7:17 PM, \"Justin Pitts\" <[email protected]> wrote:\n\n> Is there any interest in adding that (continual/automatic cluster\n> order maintenance) to a future release?\n> \n> On Wed, Jul 15, 2009 at 8:33 PM, Scott Carey<[email protected]> wrote:\n>> If you have a lot of insert/update/delete activity on a table fillfactor can\n>> help.\n>> \n>> I don¹t believe that postgres will try and maintain the table in the cluster\n>> order however.\n>> \n>> \n>> On 7/15/09 8:04 AM, \"Ibrahim Harrani\" <[email protected]> wrote:\n>> \n>> Hi,\n>> \n>> thanks for your suggestion.\n>> Is there any benefit of setting fillfactor to 70 or 80 on this table?\n>> \n>> \n>> \n>> On Wed, Jun 24, 2009 at 8:42 PM, Scott Marlowe<[email protected]>\n>> wrote:\n>>> As another poster pointed out, you cluster on ONE index and one index\n>>> only.  However, you can cluster on a multi-column index.\n>>> \n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n>> \n> \n\n", "msg_date": "Thu, 16 Jul 2009 10:35:15 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "ISTR that is the approach that MSSQL follows.\n\n>\n> Storing the full tuple in an index and not even having a data only \n> page\n> would also be an interesting approach to this (and perhaps simpler \n> than a\n> separate index file and data file if trying to keep the data in the \n> order of\n> the index).\n\n", "msg_date": "Thu, 16 Jul 2009 13:52:58 -0400", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "I could be wrong, but I think MSSQL only keeps the data specified in the\nindex in the index, and the remaining columns in the data.\nThat is, if there is a clustered index on a table on three columns out of\nfive, those three columns in the index are stored in the index, while the\nother two are in a data portion. But it has been several years since I\nworked with that DB.\n\nThey are certainly storing at least those columns in the index itself. And\nthat feature does work very well from a performance perspective.\n\nIOT in Oracle is a huge win in some cases, but a bit more clunky for others\nthan Clustered Indexes in MSSQL. 
Both are highly useful.\n\nOn 7/16/09 10:52 AM, \"Justin Pitts\" <[email protected]> wrote:\n\n> ISTR that is the approach that MSSQL follows.\n> \n>> \n>> Storing the full tuple in an index and not even having a data only\n>> page\n>> would also be an interesting approach to this (and perhaps simpler\n>> than a\n>> separate index file and data file if trying to keep the data in the\n>> order of\n>> the index).\n> \n> \n\n", "msg_date": "Thu, 16 Jul 2009 11:21:46 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "\nAccording to the books online http://msdn.microsoft.com/en-us/library/ms177443.aspx \n :\n\n\t\"In a clustered index, the leaf nodes contain the data pages of the \nunderlying table.\"\n\n\nWhich agrees with your assertion.\n\n From a performance perspective, it DOES work very well. Which is why \nI keep hoping for it to show up in PostgreSQL.\n\nOn Jul 16, 2009, at 2:21 PM, Scott Carey wrote:\n\n> I could be wrong, but I think MSSQL only keeps the data specified in \n> the\n> index in the index, and the remaining columns in the data.\n> That is, if there is a clustered index on a table on three columns \n> out of\n> five, those three columns in the index are stored in the index, \n> while the\n> other two are in a data portion. But it has been several years \n> since I\n> worked with that DB.\n>\n> They are certainly storing at least those columns in the index \n> itself. And\n> that feature does work very well from a performance perspective.\n>\n> IOT in Oracle is a huge win in some cases, but a bit more clunky for \n> others\n> than Clustered Indexes in MSSQL. Both are highly useful.\n>\n> On 7/16/09 10:52 AM, \"Justin Pitts\" <[email protected]> wrote:\n>\n>> ISTR that is the approach that MSSQL follows.\n>>\n>>>\n>>> Storing the full tuple in an index and not even having a data only\n>>> page\n>>> would also be an interesting approach to this (and perhaps simpler\n>>> than a\n>>> separate index file and data file if trying to keep the data in the\n>>> order of\n>>> the index).\n>>\n>>\n>\n\n", "msg_date": "Thu, 16 Jul 2009 14:35:50 -0400", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "Scott Carey <[email protected]> wrote: \n> I could be wrong, but I think MSSQL only keeps the data specified in\n> the index in the index, and the remaining columns in the data.\n \nUnless it has changed recently, an MS SQL Server clustered index is\nthe same as the Sybase implementation: all data for the tuple is\nstored in the leaf page of the clustered index. There is no separate\nheap. The indid in sysindexes is part of the clue -- a table has\neither one 0 entry for the heap (if there is no clustered index) or\none 1 entry for the clustered index. \"Normal\" indexes have indid of 2\nthrough 254, and indid 255 is reserved for out-of-line storage of text\nand image data.\n \n-Kevin\n", "msg_date": "Thu, 16 Jul 2009 14:15:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "Yes, it seems as though the whole tuple is entirely in the index if it is\nclustered. \n\n>From :\nhttp://msdn.microsoft.com/en-us/library/ms177484.aspx\n\n\" Each index row in the nonclustered index contains the nonclustered key\nvalue and a row locator. 
This locator points to the data row in the\nclustered index or heap having the key value.\"\n\nThat sort of model should work with MVCC and even HOT with the same\nrestrictions that HOT has now.\n\nOn 7/16/09 11:35 AM, \"Justin Pitts\" <[email protected]> wrote:\n\n> \n> \n> According to the books online\n> http://msdn.microsoft.com/en-us/library/ms177443.aspx\n> :\n> \n> \"In a clustered index, the leaf nodes contain the data pages of the\n> underlying table.\"\n> \n> \n> Which agrees with your assertion.\n> \n> From a performance perspective, it DOES work very well. Which is why\n> I keep hoping for it to show up in PostgreSQL.\n> \n> On Jul 16, 2009, at 2:21 PM, Scott Carey wrote:\n> \n>> I could be wrong, but I think MSSQL only keeps the data specified in\n>> the\n>> index in the index, and the remaining columns in the data.\n>> That is, if there is a clustered index on a table on three columns\n>> out of\n>> five, those three columns in the index are stored in the index,\n>> while the\n>> other two are in a data portion. But it has been several years\n>> since I\n>> worked with that DB.\n>> \n>> They are certainly storing at least those columns in the index\n>> itself. And\n>> that feature does work very well from a performance perspective.\n>> \n>> IOT in Oracle is a huge win in some cases, but a bit more clunky for\n>> others\n>> than Clustered Indexes in MSSQL. Both are highly useful.\n>> \n>> On 7/16/09 10:52 AM, \"Justin Pitts\" <[email protected]> wrote:\n>> \n>>> ISTR that is the approach that MSSQL follows.\n>>> \n>>>> \n>>>> Storing the full tuple in an index and not even having a data only\n>>>> page\n>>>> would also be an interesting approach to this (and perhaps simpler\n>>>> than a\n>>>> separate index file and data file if trying to keep the data in the\n>>>> order of\n>>>> the index).\n>>> \n>>> \n>> \n> \n> \n\n", "msg_date": "Thu, 16 Jul 2009 12:18:30 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "On Thu, Jul 16, 2009 at 8:18 PM, Scott Carey<[email protected]> wrote:\n> \" Each index row in the nonclustered index contains the nonclustered key\n> value and a row locator. This locator points to the data row in the\n> clustered index or heap having the key value.\"\n>\n> That sort of model should work with MVCC and even HOT with the same\n> restrictions that HOT has now.\n\n\nThe problem with this is that btree indexes need to be able to split\npages. In which case your tuple's tid changes and all hell breaks\nloose. One of the fundamental design assumptions in our MVCC design is\nthat you can trust a tuple to stay where it is as long as it's visible\nto your transaction.\n\nFor example you may want to go back and check the discussion on\ngetting vacuum to do a sequential scan of indexes. The solution we\nfound for that only works because only a single vacuum can be scanning\nthe index at a time.\n\nAnother scenario to think about, picture yourself in the middle of a\nnested loop processing all the matches for a tuple in the outer\nrelation. Now someone else comes along and wants to insert a new tuple\non the same page as that outer tuple and has to split the page. 
How do\nyou do that without messing up the nested loop which may not come back\nto that page for many minutes?\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Thu, 16 Jul 2009 20:46:00 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "\n\n\nOn 7/16/09 12:46 PM, \"Greg Stark\" <[email protected]> wrote:\n\n> On Thu, Jul 16, 2009 at 8:18 PM, Scott Carey<[email protected]> wrote:\n>> \" Each index row in the nonclustered index contains the nonclustered key\n>> value and a row locator. This locator points to the data row in the\n>> clustered index or heap having the key value.\"\n>> \n>> That sort of model should work with MVCC and even HOT with the same\n>> restrictions that HOT has now.\n> \n> \n> The problem with this is that btree indexes need to be able to split\n> pages. In which case your tuple's tid changes and all hell breaks\n> loose. One of the fundamental design assumptions in our MVCC design is\n> that you can trust a tuple to stay where it is as long as it's visible\n> to your transaction.\n> \n> For example you may want to go back and check the discussion on\n> getting vacuum to do a sequential scan of indexes. The solution we\n> found for that only works because only a single vacuum can be scanning\n> the index at a time.\n> \n> Another scenario to think about, picture yourself in the middle of a\n> nested loop processing all the matches for a tuple in the outer\n> relation. Now someone else comes along and wants to insert a new tuple\n> on the same page as that outer tuple and has to split the page. How do\n> you do that without messing up the nested loop which may not come back\n> to that page for many minutes?\n> \n\nKeep the old page around or a copy of it that old transactions reference?\nJust more Copy on Write.\nHow is that different from a nested loop on an index scan/seek currently?\nDoesn't an old transaction have to reference an old heap page through an\nindex with the current implementation? At least, the index references\nmultiple versions and their visibility must be checked. Can a similar\nsolution work here? Just reference the pre and post split pages and filter\nby visibility?\n\n> \n> --\n> greg\n> http://mit.edu/~gsstark/resume.pdf\n> \n\n", "msg_date": "Thu, 16 Jul 2009 13:06:10 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "On Thu, Jul 16, 2009 at 9:06 PM, Scott Carey<[email protected]> wrote:\n> Keep the old page around or a copy of it that old transactions reference?\n> Just more Copy on Write.\n> How is that different from a nested loop on an index scan/seek currently?\n> Doesn't an old transaction have to reference an old heap page through an\n> index with the current implementation?  At least, the index references\n> multiple versions and their visibility must be checked.  Can a similar\n> solution work here?  Just reference the pre and post split pages and filter\n> by visibility?\n\nThat would be a completely different architecture than we have now.\nWe're not Oracle, Postgres does all this at the tuple level, not at\nthe page level. We have tuple versions, not page versions, and tuple\nlocks, not page locks.\n\nThe old transaction has to reference a heap page through an index with\nthe current implementation. 
But it can do so safely precisely because\nthe tuple will be where the index references it as long as necessary.\nAs long as that old transaction is live it's guaranteed not to be\nremoved by vacuum (well... except by VACUUM FULL but that's a whole\nnother story).\n\nActually this is probably the clearest problem with IOT in the\nPostgres universe. What do other indexes use to reference these rows\nif they can move around?\n\nI wanted to call Heikki's \"grouped index item\" patch that he worked on\nfor so long index organized tables. Basically that's what they are\nexcept the leaf tuples are stored in the regular heap like always,\nhopefully in index order. And there are no leaf tuples in the index so\nthe actual index is much much smaller. It doesn't look like a\ntraditional IOT but it behaves a lot like one in the space savings and\naccess patterns.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Thu, 16 Jul 2009 21:49:26 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "Hi Scott,\n\nWhich fillfactor is better 70, 80 or another value?\n\nThanks.\n\nThanks\n\nOn Thu, Jul 16, 2009 at 3:33 AM, Scott Carey<[email protected]> wrote:\n> If you have a lot of insert/update/delete activity on a table fillfactor can\n> help.\n>\n> I don’t believe that postgres will try and maintain the table in the cluster\n> order however.\n>\n>\n> On 7/15/09 8:04 AM, \"Ibrahim Harrani\" <[email protected]> wrote:\n>\n> Hi,\n>\n> thanks for your suggestion.\n> Is there any benefit of setting fillfactor to 70 or 80 on this table?\n>\n>\n>\n> On Wed, Jun 24, 2009 at 8:42 PM, Scott Marlowe<[email protected]>\n> wrote:\n>> As another poster pointed out, you cluster on ONE index and one index\n>> only.  However, you can cluster on a multi-column index.\n>>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n", "msg_date": "Fri, 17 Jul 2009 00:16:10 +0300", "msg_from": "Ibrahim Harrani <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "\nOn 7/16/09 1:49 PM, \"Greg Stark\" <[email protected]> wrote:\n\n> On Thu, Jul 16, 2009 at 9:06 PM, Scott Carey<[email protected]> wrote:\n>> Keep the old page around or a copy of it that old transactions reference?\n>> Just more Copy on Write.\n>> How is that different from a nested loop on an index scan/seek currently?\n>> Doesn't an old transaction have to reference an old heap page through an\n>> index with the current implementation?  At least, the index references\n>> multiple versions and their visibility must be checked.  Can a similar\n>> solution work here?  Just reference the pre and post split pages and filter\n>> by visibility?\n> \n> That would be a completely different architecture than we have now.\n> We're not Oracle, Postgres does all this at the tuple level, not at\n> the page level. We have tuple versions, not page versions, and tuple\n> locks, not page locks.\n\nYes, that is a little more tricky, but I still don't see why that is an\nissue. The Copy on Write I was referring to is at the tuple level.\n\n> \n> The old transaction has to reference a heap page through an index with\n> the current implementation.\n\nAnd so it could have a reference to an clustered index leaf page with tuples\nin it through an index. 
Essentially, clustered index leaf pages are a\nspecial type of heap-like page.\n\n> But it can do so safely precisely because\n> the tuple will be where the index references it as long as necessary.\n\nCan't the same guarantee be made for clustered index data? When one splits,\nkeep the old one around as long as necessary.\n\n> As long as that old transaction is live it's guaranteed not to be\n> removed by vacuum (well... except by VACUUM FULL but that's a whole\n> nother story).\n> \n> Actually this is probably the clearest problem with IOT in the\n> Postgres universe. What do other indexes use to reference these rows\n> if they can move around?\n\n\nIndexes would point to a heap page for normal tables and clustered index\npages for clustered tables. When new versions of data come in, it may point\nto new clustered index pages, just like they currently get modified to point\nto new heap pages with new data. A split just means more data is modified.\nAnd if multiple versions are still 'live' they point to multiple versions --\njust as they now have to with the ordinary heap pages. Page splitting only\nmeans that there might be some copies of row versions with the same tid due\nto a split, but labeling a page with a 'creation' tid to differentiate the\npre and post split pages removes the ambiguity. I suppose that would\nrequire some page locking.\n\n> \n> I wanted to call Heikki's \"grouped index item\" patch that he worked on\n> for so long index organized tables. Basically that's what they are\n> except the leaf tuples are stored in the regular heap like always,\n> hopefully in index order. And there are no leaf tuples in the index so\n> the actual index is much much smaller. It doesn't look like a\n> traditional IOT but it behaves a lot like one in the space savings and\n> access patterns.\n\nSounds like its a best-effort to maintain the heap in the index order, using\nFILLFACTOR and dead space to hopefully keep it in roughly the right order.\nThat is an incremental improvement on the current situation.\n\nFor clustered indexes, clustered index pages would need to be treated like\nthe current heap pages and tuples with respect to copy on write. Yes, that\nis more than a trivial modification of the current heap or index page types\nand their MVCC semantics. Its definitely a third type of page.\n\nAm I missing something?\n\n> \n> --\n> greg\n> http://mit.edu/~gsstark/resume.pdf\n> \n\n", "msg_date": "Thu, 16 Jul 2009 17:02:18 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "On Fri, Jul 17, 2009 at 1:02 AM, Scott Carey<[email protected]> wrote:\n> Indexes would point to a heap page for normal tables and clustered index\n> pages for clustered tables.  When new versions of data come in, it may point\n> to new clustered index pages, just like they currently get modified to point\n> to new heap pages with new data.  A split just means more data is modified.\n> And if multiple versions are still 'live' they point to multiple versions --\n> just as they now have to with the ordinary heap pages.  Page splitting only\n> means that there might be some copies of row versions with the same tid due\n> to a split, but labeling a page with a 'creation' tid to differentiate the\n> pre and post split pages removes the ambiguity.  
I suppose that would\n> require some page locking.\n\n\nNone of this makes much sense to me.\n\nIf you keep the old tuples around when you split a full page where are\nyou going to put the new tuples you're inserting?\n\nAlso, are you clear what a tid is? It's the block number plus the\nindex to the tuple entry. If you split the page and move half the\ntuples to the new page they'll all have different tids.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Fri, 17 Jul 2009 01:27:35 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "\nOn 7/16/09 5:27 PM, \"Greg Stark\" <[email protected]> wrote:\n\n> On Fri, Jul 17, 2009 at 1:02 AM, Scott Carey<[email protected]> wrote:\n>> Indexes would point to a heap page for normal tables and clustered index\n>> pages for clustered tables.  When new versions of data come in, it may point\n>> to new clustered index pages, just like they currently get modified to point\n>> to new heap pages with new data.  A split just means more data is modified.\n>> And if multiple versions are still 'live' they point to multiple versions --\n>> just as they now have to with the ordinary heap pages.  Page splitting only\n>> means that there might be some copies of row versions with the same tid due\n>> to a split, but labeling a page with a 'creation' tid to differentiate the\n>> pre and post split pages removes the ambiguity.  I suppose that would\n>> require some page locking.\n> \n> \n> None of this makes much sense to me.\n> \n> If you keep the old tuples around when you split a full page where are\n> you going to put the new tuples you're inserting?\n\nThe new page?\nSimplest solution - split into two or more new pages and copy. Now there\nare two copies of all the old stuff, and one copy of the new tuples (in the\nnew pages).\nOne could optimize it to only create one new page and copy half the tuples,\ndepending on where the inserted ones would go, but its probably not a big\nwin and makes it more complicated.\n\nIn the current scheme, when an update occurs and the HOT feature is not\napplicable, there is a copy-on-write for that tuple, and AFAICS all the\nindexes must point to both those tuples as long as the previous one is still\nvisible -- so there are two or more index references for the same tuple at\ndifferent versions.\n\nIn the case I am describing, all the tuples in the page that is split would\nnow have multiple entries in all indexes (which isn't a cheap thing to do).\nAn old transaction can see all the index references, but either the tuples\nor page could be marked to track visibility and remove the ambiguity. If\nthe transaction id/number ('spit id'?) that caused the split and created the\nnew pages is stored in the new pages, all ambiguity can be resolved for a\nunique index. With a non-unique index there would have to be some tuple\nlevel visibility check or store more information in the index (how does it\nwork now?).\n\n> \n> Also, are you clear what a tid is? It's the block number plus the\n> index to the tuple entry. If you split the page and move half the\n> tuples to the new page they'll all have different tids.\n> \n\nI was thinking tid = transaction id based on the context. A version number\nof sorts -- the one that each tuple is marked with for visibility. But its\nessentially the pointer to the tuple. 
My mistake.\n\nAll tuples in a split page will likely have to be copied, and it will have\nto be such that both copies exist until it can be sure the old copy is no\nlonger visible. Its essentially the same as the copy on write now but\naffects more than the tuple that is updated -- it affects all in the page\nwhen there is a split.\nFor the actual index with the embedded tuples, that's not so bad since all\nthe references to these are co-located. For other indexes, its an expensive\nupdate. \n\nBut, that is what FILLFACTOR and the like are for -- Making sure splits\noccur less often, and perhaps even reserving some empty pages in the index\nat creation time for future splits. Maintenance and bloat is the tradeoff\nwith these things, but the performance payoff for many workloads is huge.\n\nAm I missing something else? Is it not possible to move tuples that aren't\nbeing updated without something like an exclusive table lock?\n\nI'm definitely not saying that the above would be easy, I just don't yet see\nhow it is incompatible with Postgres' MVCC. Hard? Probably.\n\n> --\n> greg\n> http://mit.edu/~gsstark/resume.pdf\n> \n\n", "msg_date": "Thu, 16 Jul 2009 20:07:43 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" } ]
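To close out the practical part of this thread, here is a sketch of the two settings that were discussed, written against the maillogs table and the idx_maillogs_recvtime index from the original post. Whether recvtime is the right clustering key depends on which queries matter most, as Ken points out, and the fillfactor of 80 is only an example value, not a recommendation.

  -- reserve some free space in each heap page so updated rows can often stay
  -- on the same page; existing pages keep their layout until rewritten
  ALTER TABLE maillogs SET (fillfactor = 80);

  -- one-off physical reordering by the chosen index (8.3+ syntax shown);
  -- the order is not maintained automatically as new rows arrive
  CLUSTER maillogs USING idx_maillogs_recvtime;
  ANALYZE maillogs;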