[ { "msg_contents": "I've got a normalized data table from which I'm trying to select a small\nsubset of rows determined by both traditional filtering as well as the\nresult of a cpu-expensive function that I've defined. What I'm seeing\nis that the query planner always attempts to defer the de-normalizing\nJOIN over the function which causes the query to be much slower than it\nwould be if the JOIN were performed (for filtering) before the function\nis run on the rows.\n\nIs there any way for me to influence the query planner so that it can\nknow that the JOIN is far less expensive than the function for planning?\nThe COST attribute on the function appears to have no effect.\n\nI'm testing on:\n PostgreSQL 9.2.4 on amd64-portbld-freebsd9.1, compiled by cc (GCC)\n 4.2.1 20070831 patched [FreeBSD], 64-bit\n\nHere's a synthetic example which demonstrates the issue. A very simple\ntable with normalized codes in a secondary table.\n\n CREATE TABLE codes (\n code_id integer NOT NULL,\n code varchar NOT NULL,\n PRIMARY KEY(code_id)\n );\n\n INSERT INTO codes(code_id,code) SELECT 1,'one';\n INSERT INTO codes(code_id,code) SELECT 2,'two';\n INSERT INTO codes(code_id,code) SELECT 3,'three';\n INSERT INTO codes(code_id,code) SELECT 4,'four';\n INSERT INTO codes(code_id,code) SELECT 5,'five';\n\n CREATE TABLE examples (\n example_id serial NOT NULL,\n code_id integer NOT NULL REFERENCES codes(code_id),\n value varchar,\n PRIMARY KEY(example_id)\n );\n\n INSERT INTO examples (code_id,value) SELECT 1,'een';\n INSERT INTO examples (code_id,value) SELECT 2,'koe';\n INSERT INTO examples (code_id,value) SELECT 3,'doet';\n INSERT INTO examples (code_id,value) SELECT 4,'boe';\n\nAnd a de-normalizing view for access:\n\n CREATE VIEW examples_view AS\n SELECT e.*,c.code FROM examples e LEFT JOIN codes c USING (code_id);\n\n\nAnd a user-defined function which is painfully slow to run:\n \n CREATE FUNCTION painfully_slow_function(id integer,value varchar) RETURNS boolean AS $$\n BEGIN\n RAISE NOTICE 'Processing ID % (%)',id,value;\n PERFORM pg_sleep(10);\n RETURN TRUE;\n END;\n$$ LANGUAGE plpgsql;\n\nA simple SELECT not trying to de-normalize the data, only involving the basa\ntable does what we'd hope. 
Note that the function is only run on the code_id matching row because the planner rightly filters on that first:\n\n[nugget@[local]|costtest] > EXPLAIN (ANALYZE,BUFFERS) SELECT * FROM examples WHERE code_id = 3 AND painfully_slow_function(example_id,value) IS TRUE;\nNOTICE: Processing ID 3 (doet)\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------\n Seq Scan on examples (cost=0.00..314.50 rows=2 width=40) (actual time=10010.925..10010.929 rows=1 loops=1)\n Filter: ((code_id = 3) AND (painfully_slow_function(example_id, value) IS TRUE))\n Rows Removed by Filter: 3\n Buffers: shared hit=2 read=2\n Total runtime: 10010.948 ms\n(5 rows)\n\nTime: 10011.328 ms\n\nHowever, if the SELECT instead uses the VIEW which de-normalizes the data \nthe query planner defers the join and the result is running the function on\nall rows in the table:\n\n[nugget@[local]|costtest] > EXPLAIN (ANALYZE,BUFFERS) SELECT * FROM examples_view WHERE code = 'three' AND painfully_slow_function(example_id,value) IS TRUE;\nNOTICE: Processing ID 1 (een)\nNOTICE: Processing ID 2 (koe)\nNOTICE: Processing ID 3 (doet)\nNOTICE: Processing ID 4 (boe)\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=25.45..338.52 rows=2 width=72) (actual time=30053.776..40073.772 rows=1 loops=1)\n Hash Cond: (e.code_id = c.code_id)\n Buffers: shared hit=2\n -> Seq Scan on examples e (cost=0.00..311.60 rows=387 width=40) (actual time=10013.765..40073.708 rows=4 loops=1)\n Filter: (painfully_slow_function(example_id, value) IS TRUE)\n Buffers: shared hit=1\n -> Hash (cost=25.38..25.38 rows=6 width=36) (actual time=0.019..0.019 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n Buffers: shared hit=1\n -> Seq Scan on codes c (cost=0.00..25.38 rows=6 width=36) (actual time=0.013..0.014 rows=1 loops=1)\n Filter: ((code)::text = 'three'::text)\n Rows Removed by Filter: 4\n Buffers: shared hit=1\n Total runtime: 40073.813 ms\n(14 rows)\n\nTime: 40074.363 ms\n\n\nEven if I juke the COST on the function to crank it up to a ridiculous execution cost, the query planner doesn't seem to change ( also at http://explain.depesz.com/s/WEh ):\n\n[nugget@[local]|costtest] > ALTER FUNCTION painfully_slow_function(integer,varchar) COST 2147483647;\nALTER FUNCTION\nTime: 1.637 ms\n[nugget@[local]|costtest] > EXPLAIN (ANALYZE,BUFFERS) SELECT * FROM examples_view WHERE code = 'three' AND painfully_slow_function(example_id,value) IS TRUE;\nNOTICE: Processing ID 1 (een)\nNOTICE: Processing ID 2 (koe)\nNOTICE: Processing ID 3 (doet)\nNOTICE: Processing ID 4 (boe)\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..6227702661.02 rows=2 width=72) (actual time=30056.425..40076.436 rows=1 loops=1)\n Join Filter: (e.code_id = c.code_id)\n Rows Removed by Join Filter: 3\n Buffers: shared hit=5\n -> Seq Scan on examples e (cost=0.00..6227702600.80 rows=387 width=40) (actual time=10016.458..40076.370 rows=4 loops=1)\n Filter: (painfully_slow_function(example_id, value) IS TRUE)\n Buffers: shared hit=4\n -> Materialize (cost=0.00..25.41 rows=6 width=36) (actual time=0.005..0.007 rows=1 loops=4)\n Buffers: shared hit=1\n -> Seq Scan on codes c (cost=0.00..25.38 rows=6 width=36) (actual time=0.011..0.012 rows=1 loops=1)\n Filter: ((code)::text = 'three'::text)\n Rows Removed by Filter: 
4\n Buffers: shared hit=1\n Total runtime: 40076.475 ms\n(14 rows)\n\nIs there any COST level where PostgreSQL will properly determine that the\njoin is less expensive than the function? Or, is there another knob that I\ncan turn which will influence the query planner in this way?\n\nThanks! I hope I've just missed something obvious in the documentation.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 20 Aug 2013 13:06:51 -0500", "msg_from": "David McNett <[email protected]>", "msg_from_op": true, "msg_subject": "Can query planner prefer a JOIN over a high-cost Function?" }, { "msg_contents": "David McNett <[email protected]> writes:\n> Is there any way for me to influence the query planner so that it can\n> know that the JOIN is far less expensive than the function for planning?\n> The COST attribute on the function appears to have no effect.\n\nI think what you're missing is an index on examples.code_id, which\nwould allow for a plan like this one:\n\n Nested Loop (cost=154.41..205263.18 rows=2185 width=16)\n -> Seq Scan on codes c (cost=0.00..1.06 rows=1 width=8)\n Filter: ((code)::text = 'three'::text)\n -> Bitmap Heap Scan on examples e (cost=154.41..205234.81 rows=2731 width=1\n2)\n Recheck Cond: (code_id = c.code_id)\n Filter: (painfully_slow_function(example_id, value) IS TRUE)\n -> Bitmap Index Scan on examples_code_id_idx (cost=0.00..153.73 rows=\n8192 width=0)\n Index Cond: (code_id = c.code_id)\n\nIf you really want to force the join to occur separately, you could\nprobably do something involving a sub-select with OFFSET 0, but I wouldn't\nrecommend pursuing that path unless you can't get a decent result without\ncontorting the query.\n\nAnother thing worth thinking about is whether you could precalculate the\nexpensive function via a functional index. It'd have to be immutable,\nbut if it is, this is a useful way of changing the ground rules.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 20 Aug 2013 15:52:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can query planner prefer a JOIN over a high-cost Function?" } ]
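
A minimal sketch of the remedies Tom Lane describes above, written against the schema from the original post. The index name matches the one visible in his EXPLAIN output; the OFFSET 0 query is only one possible way to apply his "sub-select with OFFSET 0" hint, and the expression index is left commented out because the example function is not declared IMMUTABLE as written.

    -- Index on the join/filter column so the planner can drive the join first
    -- and evaluate the expensive function only on the matching rows:
    CREATE INDEX examples_code_id_idx ON examples (code_id);

    -- Optimization fence: the OFFSET 0 subquery is planned on its own, so the
    -- join and the code filter run before the slow function is applied:
    SELECT *
      FROM (SELECT e.*, c.code
              FROM examples e
              LEFT JOIN codes c USING (code_id)
             WHERE c.code = 'three'
            OFFSET 0) sub
     WHERE painfully_slow_function(example_id, value) IS TRUE;

    -- If the function could be marked IMMUTABLE, its result could instead be
    -- precalculated in an expression index and never evaluated at query time:
    -- CREATE INDEX examples_slow_fn_idx
    --     ON examples (painfully_slow_function(example_id, value));

With the index in place the planner can choose the nested-loop-over-bitmap-scan plan shown in the reply, evaluating the function once per matching row rather than once per table row.
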
[ { "msg_contents": "Hi,\nI am running PostgreSQL 9.2.4 on windows 8 , 64 bit operating system , 4GB RAM.A laptop with i3 - 3110M , 2.4 GHZ . The database came bundled with wapp stack 5.4.17-0. We have an php application that serves data from PostgreSQL 9.2.4.\nThe configuration runs with very good performance (3 sec response php + db ) on windows 7 32, 64 bit OS , 4GB RAM (desktops with i3-2100 3.10 GHZ ) .But take around 25 seconds to render on windows 8 , the laptop. \nI was able to eliminate php , as the performance was as expected. (without DB calls)On the other part the database calls take more than 100 ms for simple queries (Example a table with just 10 row sometimes takes around 126 ms). This information i was able to collect from the pg_log.\nThe php pages have multiple queries in them, a single query works as expected, but running multiple queries in the page causes the db performance to go down. Please note this setup is working fine (3 sec overall including php ) on all windows 7 32, 64 bit OS , desktops.\nAppreciate help in giving me an direction on how to get to the issue.The db size is 11mb only. Most of the tables have less than 100 rows with appropriate indexes. Some tables have more than 1000 rows , are not queried in the php pages . The super user login is used from php . (Changing super user reserved connections did not help, tried changing shared _buffers and other setting , none of the setting seem to have any effect on the db performance )\nFollowing are the variable settings that works fine on on all windows 7 32, 64 bit OS , desktops. \n\n NameSettingallow_system_table_modsoffapplication_namearchive_command(disabled)archive_modeoffarchive_timeout0array_nullsonauthentication_timeout1minautovacuumonautovacuum_analyze_scale_factor0.1autovacuum_analyze_threshold50autovacuum_freeze_max_age200000000autovacuum_max_workers3autovacuum_naptime1minautovacuum_vacuum_cost_delay20msautovacuum_vacuum_cost_limit-1autovacuum_vacuum_scale_factor0.2autovacuum_vacuum_threshold50backslash_quotesafe_encodingbgwriter_delay200msbgwriter_lru_maxpages100bgwriter_lru_multiplier2block_size8192bonjouroffbonjour_namebytea_outputescapecheck_function_bodiesoncheckpoint_completion_target0.5checkpoint_segments3checkpoint_timeout5mincheckpoint_warning30sclient_encodingUTF8client_min_messagesnoticecommit_delay0commit_siblings5config_fileC:/xxxx~2/POSTGR~1/data/postgresql.confconstraint_exclusionpartitioncpu_index_tuple_cost0.005cpu_operator_cost0.0025cpu_tuple_cost0.01cursor_tuple_fraction0.1data_directoryC:/xxx~2/POSTGR~1/dataDateStyleISO, MDYdb_user_namespaceoffdeadlock_timeout1sdebug_assertionsoffdebug_pretty_printondebug_print_parseoffdebug_print_planoffdebug_print_rewrittenoffdefault_statistics_target100default_tablespacedefault_text_search_configpg_catalog.englishdefault_transaction_deferrableoffdefault_transaction_isolationread 
committeddefault_transaction_read_onlyoffdefault_with_oidsoffdynamic_library_path$libdireffective_cache_size128MBeffective_io_concurrency0enable_bitmapscanonenable_hashaggonenable_hashjoinonenable_indexonlyscanonenable_indexscanonenable_materialonenable_mergejoinonenable_nestlooponenable_seqscanonenable_sortonenable_tidscanonescape_string_warningonevent_sourcePostgreSQLexit_on_erroroffexternal_pid_fileextra_float_digits0from_collapse_limit8fsynconfull_page_writesongeqoongeqo_effort5geqo_generations0geqo_pool_size0geqo_seed0geqo_selection_bias2geqo_threshold12gin_fuzzy_search_limit0hba_fileC:/xxxx~2/POSTGR~1/data/pg_hba.confhot_standbyoffhot_standby_feedbackoffident_fileC:/xxxx~2/POSTGR~1/data/pg_ident.confignore_system_indexesoffinteger_datetimesonIntervalStylepostgresjoin_collapse_limit8krb_caseins_usersoffkrb_server_keyfilekrb_srvnamepostgreslc_collateEnglish_United States.1252lc_ctypeEnglish_United States.1252lc_messagesEnglish_United States.1252lc_monetaryEnglish_United States.1252lc_numericEnglish_United States.1252lc_timeEnglish_United States.1252listen_addresses127.0.0.1lo_compat_privilegesofflocal_preload_librarieslog_autovacuum_min_duration-1log_checkpointsofflog_connectionsofflog_destinationstderrlog_directorypg_loglog_disconnectionsofflog_durationofflog_error_verbositydefaultlog_executor_statsofflog_file_mode0600log_filenamepostgresql-%Y-%m-%d_%H%M%S.loglog_hostnameofflog_line_prefixlog_lock_waitsofflog_min_duration_statement-1log_min_error_statementerrorlog_min_messageswarninglog_parser_statsofflog_planner_statsofflog_rotation_age1dlog_rotation_size10MBlog_statementnonelog_statement_statsofflog_temp_files-1log_timezoneAsia/Calcuttalog_truncate_on_rotationofflogging_collectoronmaintenance_work_mem16MBmax_connections100max_files_per_process1000max_function_args100max_identifier_length63max_index_keys32max_locks_per_transaction64max_pred_locks_per_transaction64max_prepared_transactions0max_stack_depth2MBmax_standby_archive_delay30smax_standby_streaming_delay30smax_wal_senders0password_encryptiononport5432post_auth_delay0pre_auth_delay0quote_all_identifiersoffrandom_page_cost4replication_timeout1minrestart_after_crashonsearch_path\"$user\",viplsegment_size1GBseq_page_cost1server_encodingUTF8server_version9.2.4server_version_num90204session_replication_roleoriginshared_buffers1GBshared_preload_librariessql_inheritanceonssloffssl_ca_filessl_cert_fileserver.crtssl_ciphersALL:!ADH:!LOW:!EXP:!MD5:@STRENGTHssl_crl_filessl_key_fileserver.keyssl_renegotiation_limit512MBstandard_conforming_stringsonstatement_timeout0stats_temp_directorypg_stat_tmpsuperuser_reserved_connections3synchronize_seqscansonsynchronous_commitonsynchronous_standby_namessyslog_facilitynonesyslog_identpostgrestcp_keepalives_count0tcp_keepalives_idle-1tcp_keepalives_interval-1temp_buffers16MBtemp_file_limit-1temp_tablespacesTimeZoneAsia/Calcuttatimezone_abbreviationsDefaulttrace_notifyofftrace_recovery_messageslogtrace_sortofftrack_activitiesontrack_activity_query_size1024track_countsontrack_functionsnonetrack_io_timingofftransaction_deferrableofftransaction_isolationread 
committedtransaction_read_onlyofftransform_null_equalsoffunix_socket_directoryunix_socket_groupunix_socket_permissions0777update_process_titleonvacuum_cost_delay0vacuum_cost_limit200vacuum_cost_page_dirty20vacuum_cost_page_hit1vacuum_cost_page_miss10vacuum_defer_cleanup_age0vacuum_freeze_min_age50000000vacuum_freeze_table_age150000000wal_block_size8192wal_buffers16MBwal_keep_segments0wal_levelminimalwal_receiver_status_interval10swal_segment_size16MBwal_sync_methodopen_datasyncwal_writer_delay200mswork_mem512MBxmlbinarybase64xmloptioncontentzero_damaged_pagesoff\n\n\n\n\nThanksGirish Subbaramu.\n \t\t \t \t\t \n\n\n\nHi,I am running PostgreSQL 9.2.4 on windows 8  , 64 bit operating system , 4GB RAM.A laptop with i3 - 3110M , 2.4 GHZ . The database  came bundled with wapp stack 5.4.17-0. We have an php application that serves data from PostgreSQL 9.2.4.The configuration runs with very good performance (3 sec response php + db ) on windows 7   32, 64 bit OS , 4GB RAM (desktops with i3-2100 3.10 GHZ ) .But take around 25 seconds to render on windows 8 , the laptop. I was able to eliminate php , as the performance was as expected. (without DB calls)On the other part the database calls take more than 100 ms for simple queries (Example a table with just 10 row sometimes takes around 126 ms).  This information i was able to collect from the pg_log.The php pages have multiple queries in them, a single query works as expected, but running multiple queries in the page causes the db performance to go down. Please note this setup is working fine (3 sec  overall including php ) on all  windows 7   32, 64 bit OS , desktops.Appreciate help in giving me an direction on how to get to the issue.The db size is 11mb only. Most of the tables have less than 100 rows with appropriate indexes. Some tables have more than 1000 rows , are not queried  in the php pages . The super user login is used from php . (Changing super user reserved connections did not help, tried changing shared _buffers and other setting , none of the setting seem to have any effect on the db performance )Following are the variable settings that works fine on  on all  windows 7   32, 64 bit OS , desktops.  
NameSettingallow_system_table_modsoffapplication_namearchive_command(disabled)archive_modeoffarchive_timeout0array_nullsonauthentication_timeout1minautovacuumonautovacuum_analyze_scale_factor0.1autovacuum_analyze_threshold50autovacuum_freeze_max_age200000000autovacuum_max_workers3autovacuum_naptime1minautovacuum_vacuum_cost_delay20msautovacuum_vacuum_cost_limit-1autovacuum_vacuum_scale_factor0.2autovacuum_vacuum_threshold50backslash_quotesafe_encodingbgwriter_delay200msbgwriter_lru_maxpages100bgwriter_lru_multiplier2block_size8192bonjouroffbonjour_namebytea_outputescapecheck_function_bodiesoncheckpoint_completion_target0.5checkpoint_segments3checkpoint_timeout5mincheckpoint_warning30sclient_encodingUTF8client_min_messagesnoticecommit_delay0commit_siblings5config_fileC:/xxxx~2/POSTGR~1/data/postgresql.confconstraint_exclusionpartitioncpu_index_tuple_cost0.005cpu_operator_cost0.0025cpu_tuple_cost0.01cursor_tuple_fraction0.1data_directoryC:/xxx~2/POSTGR~1/dataDateStyleISO, MDYdb_user_namespaceoffdeadlock_timeout1sdebug_assertionsoffdebug_pretty_printondebug_print_parseoffdebug_print_planoffdebug_print_rewrittenoffdefault_statistics_target100default_tablespacedefault_text_search_configpg_catalog.englishdefault_transaction_deferrableoffdefault_transaction_isolationread committeddefault_transaction_read_onlyoffdefault_with_oidsoffdynamic_library_path$libdireffective_cache_size128MBeffective_io_concurrency0enable_bitmapscanonenable_hashaggonenable_hashjoinonenable_indexonlyscanonenable_indexscanonenable_materialonenable_mergejoinonenable_nestlooponenable_seqscanonenable_sortonenable_tidscanonescape_string_warningonevent_sourcePostgreSQLexit_on_erroroffexternal_pid_fileextra_float_digits0from_collapse_limit8fsynconfull_page_writesongeqoongeqo_effort5geqo_generations0geqo_pool_size0geqo_seed0geqo_selection_bias2geqo_threshold12gin_fuzzy_search_limit0hba_fileC:/xxxx~2/POSTGR~1/data/pg_hba.confhot_standbyoffhot_standby_feedbackoffident_fileC:/xxxx~2/POSTGR~1/data/pg_ident.confignore_system_indexesoffinteger_datetimesonIntervalStylepostgresjoin_collapse_limit8krb_caseins_usersoffkrb_server_keyfilekrb_srvnamepostgreslc_collateEnglish_United States.1252lc_ctypeEnglish_United States.1252lc_messagesEnglish_United States.1252lc_monetaryEnglish_United States.1252lc_numericEnglish_United States.1252lc_timeEnglish_United 
States.1252listen_addresses127.0.0.1lo_compat_privilegesofflocal_preload_librarieslog_autovacuum_min_duration-1log_checkpointsofflog_connectionsofflog_destinationstderrlog_directorypg_loglog_disconnectionsofflog_durationofflog_error_verbositydefaultlog_executor_statsofflog_file_mode0600log_filenamepostgresql-%Y-%m-%d_%H%M%S.loglog_hostnameofflog_line_prefixlog_lock_waitsofflog_min_duration_statement-1log_min_error_statementerrorlog_min_messageswarninglog_parser_statsofflog_planner_statsofflog_rotation_age1dlog_rotation_size10MBlog_statementnonelog_statement_statsofflog_temp_files-1log_timezoneAsia/Calcuttalog_truncate_on_rotationofflogging_collectoronmaintenance_work_mem16MBmax_connections100max_files_per_process1000max_function_args100max_identifier_length63max_index_keys32max_locks_per_transaction64max_pred_locks_per_transaction64max_prepared_transactions0max_stack_depth2MBmax_standby_archive_delay30smax_standby_streaming_delay30smax_wal_senders0password_encryptiononport5432post_auth_delay0pre_auth_delay0quote_all_identifiersoffrandom_page_cost4replication_timeout1minrestart_after_crashonsearch_path\"$user\",viplsegment_size1GBseq_page_cost1server_encodingUTF8server_version9.2.4server_version_num90204session_replication_roleoriginshared_buffers1GBshared_preload_librariessql_inheritanceonssloffssl_ca_filessl_cert_fileserver.crtssl_ciphersALL:!ADH:!LOW:!EXP:!MD5:@STRENGTHssl_crl_filessl_key_fileserver.keyssl_renegotiation_limit512MBstandard_conforming_stringsonstatement_timeout0stats_temp_directorypg_stat_tmpsuperuser_reserved_connections3synchronize_seqscansonsynchronous_commitonsynchronous_standby_namessyslog_facilitynonesyslog_identpostgrestcp_keepalives_count0tcp_keepalives_idle-1tcp_keepalives_interval-1temp_buffers16MBtemp_file_limit-1temp_tablespacesTimeZoneAsia/Calcuttatimezone_abbreviationsDefaulttrace_notifyofftrace_recovery_messageslogtrace_sortofftrack_activitiesontrack_activity_query_size1024track_countsontrack_functionsnonetrack_io_timingofftransaction_deferrableofftransaction_isolationread committedtransaction_read_onlyofftransform_null_equalsoffunix_socket_directoryunix_socket_groupunix_socket_permissions0777update_process_titleonvacuum_cost_delay0vacuum_cost_limit200vacuum_cost_page_dirty20vacuum_cost_page_hit1vacuum_cost_page_miss10vacuum_defer_cleanup_age0vacuum_freeze_min_age50000000vacuum_freeze_table_age150000000wal_block_size8192wal_buffers16MBwal_keep_segments0wal_levelminimalwal_receiver_status_interval10swal_segment_size16MBwal_sync_methodopen_datasyncwal_writer_delay200mswork_mem512MBxmlbinarybase64xmloptioncontentzero_damaged_pagesoffThanksGirish Subbaramu.", "msg_date": "Thu, 22 Aug 2013 11:30:08 +0000", "msg_from": "girish subbaramu <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 9.2.4 very slow on laptop with windows 8" }, { "msg_contents": "As I see only Windows7 supported ( with EnterpriseDB version of PostgreSQL\n9.2 Windows installer )\nhttp://www.enterprisedb.com/products-services-training/products-overview/postgresql-overview/supported-platforms-and-release-lif\n\nHave you been tested with PostgreSQL 9.3 rc1 ? same speed ?\nhttp://www.enterprisedb.com/products-services-training/pgdevdownload\n\nand some testing tips:\n- modify laptop power settings\n- compare disk speeds (laptop vs. 
desktop )\n- ...\n\nImre\n\n\n2013/8/22 girish subbaramu <[email protected]>\n\n>\n> Hi,\n>\n> I am running PostgreSQL 9.2.4 on windows 8 , 64 bit operating system ,\n> 4GB RAM.\n> A laptop with i3 - 3110M , 2.4 GHZ .\n> The database came bundled with wapp stack 5.4.17-0. We have an php\n> application that serves data from PostgreSQL 9.2.4.\n>\n> The configuration runs with very good performance (3 sec response php + db\n> ) on windows 7 32, 64 bit OS , 4GB RAM (desktops with i3-2100 3.10 GHZ ) .\n> But take around 25 seconds to render on windows 8 , the laptop.\n>\n> I was able to eliminate php , as the performance was as expected. (without\n> DB calls)\n> On the other part the database calls take more than 100 ms for simple\n> queries (Example a table with just 10 row sometimes takes around 126 ms).\n> This information i was able to collect from the pg_log.\n>\n> The php pages have multiple queries in them, a single query works as\n> expected, but running multiple queries in the page causes the db\n> performance to go down. Please note this setup is working fine (3 sec\n> overall including php ) on all windows 7 32, 64 bit OS , desktops.\n>\n> Appreciate help in giving me an direction on how to get to the issue.\n> The db size is 11mb only. Most of the tables have less than 100 rows with\n> appropriate indexes. Some tables have more than 1000 rows , are not queried\n> in the php pages . The super user login is used from php . (Changing super\n> user reserved connections did not help, tried changing shared _buffers and\n> other setting , none of the setting seem to have any effect on the db\n> performance )\n>\n> Following are the variable settings that works fine on on all windows 7\n> 32, 64 bit OS , desktops.\n>\n>\n>\n> Name Settingallow_system_table_modsoffapplication_name archive_command\n> (disabled)archive_modeoff archive_timeout0array_nullson\n> authentication_timeout1minautovacuumonautovacuum_analyze_scale_factor 0.1\n> autovacuum_analyze_threshold50autovacuum_freeze_max_age 200000000\n> autovacuum_max_workers3autovacuum_naptime 1minautovacuum_vacuum_cost_delay\n> 20msautovacuum_vacuum_cost_limit -1autovacuum_vacuum_scale_factor0.2\n> autovacuum_vacuum_threshold 50backslash_quotesafe_encodingbgwriter_delay200ms\n> bgwriter_lru_maxpages100bgwriter_lru_multiplier 2block_size8192bonjouroff\n> bonjour_namebytea_outputescapecheck_function_bodies on\n> checkpoint_completion_target0.5checkpoint_segments 3checkpoint_timeout5min\n> checkpoint_warning30s client_encodingUTF8client_min_messagesnotice\n> commit_delay0commit_siblings5 config_file\n> C:/xxxx~2/POSTGR~1/data/postgresql.confconstraint_exclusionpartition\n> cpu_index_tuple_cost0.005cpu_operator_cost0.0025 cpu_tuple_cost0.01\n> cursor_tuple_fraction0.1data_directory C:/xxx~2/POSTGR~1/dataDateStyleISO,\n> MDYdb_user_namespace offdeadlock_timeout1sdebug_assertionsoff\n> debug_pretty_printondebug_print_parseoff debug_print_planoff\n> debug_print_rewrittenoff default_statistics_target100default_tablespace\n> default_text_search_config pg_catalog.english\n> default_transaction_deferrableoffdefault_transaction_isolation read\n> committeddefault_transaction_read_onlyoffdefault_with_oids off\n> dynamic_library_path$libdireffective_cache_size 128MB\n> effective_io_concurrency0enable_bitmapscan onenable_hashaggon\n> enable_hashjoinon enable_indexonlyscanonenable_indexscanon enable_material\n> onenable_mergejoinonenable_nestloop onenable_seqscanonenable_sorton\n> enable_tidscanonescape_string_warningonevent_source PostgreSQL\n> 
exit_on_erroroffexternal_pid_file extra_float_digits0from_collapse_limit8\n> fsynconfull_page_writeson geqoongeqo_effort5geqo_generations 0\n> geqo_pool_size0geqo_seed0 geqo_selection_bias2geqo_threshold12\n> gin_fuzzy_search_limit 0hba_fileC:/xxxx~2/POSTGR~1/data/pg_hba.conf\n> hot_standby offhot_standby_feedbackoffident_file\n> C:/xxxx~2/POSTGR~1/data/pg_ident.conf ignore_system_indexesoff\n> integer_datetimeson IntervalStylepostgresjoin_collapse_limit8\n> krb_caseins_usersoffkrb_server_keyfilekrb_srvname postgreslc_collateEnglish_United\n> States.1252lc_ctype English_United States.1252lc_messagesEnglish_United\n> States.1252 lc_monetaryEnglish_United States.1252lc_numericEnglish_United\n> States.1252 lc_timeEnglish_United States.1252listen_addresses127.0.0.1\n> lo_compat_privilegesofflocal_preload_libraries log_autovacuum_min_duration\n> -1log_checkpointsoff log_connectionsofflog_destinationstderrlog_directory\n> pg_loglog_disconnectionsofflog_durationoff log_error_verbositydefault\n> log_executor_statsoff log_file_mode0600log_filename\n> postgresql-%Y-%m-%d_%H%M%S.log log_hostnameofflog_line_prefix\n> log_lock_waitsofflog_min_duration_statement-1 log_min_error_statementerror\n> log_min_messageswarning log_parser_statsofflog_planner_statsoff\n> log_rotation_age 1dlog_rotation_size10MBlog_statementnone\n> log_statement_statsofflog_temp_files-1 log_timezoneAsia/Calcutta\n> log_truncate_on_rotationoff logging_collectoronmaintenance_work_mem16MBmax_connections\n> 100max_files_per_process1000max_function_args 100max_identifier_length63\n> max_index_keys32 max_locks_per_transaction64max_pred_locks_per_transaction\n> 64max_prepared_transactions0max_stack_depth 2MBmax_standby_archive_delay\n> 30smax_standby_streaming_delay 30smax_wal_senders0password_encryptionon\n> port5432post_auth_delay0 pre_auth_delay0quote_all_identifiersoff\n> random_page_cost 4replication_timeout1minrestart_after_crash onsearch_path\n> \"$user\",viplsegment_size 1GBseq_page_cost1server_encodingUTF8\n> server_version9.2.4server_version_num90204session_replication_role origin\n> shared_buffers1GBshared_preload_libraries sql_inheritanceonssloff\n> ssl_ca_filessl_cert_fileserver.crtssl_ciphers\n> ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTHssl_crl_filessl_key_file server.key\n> ssl_renegotiation_limit512MBstandard_conforming_strings on\n> statement_timeout0stats_temp_directorypg_stat_tmp\n> superuser_reserved_connections3synchronize_seqscanson synchronous_commiton\n> synchronous_standby_names syslog_facilitynonesyslog_identpostgres\n> tcp_keepalives_count0tcp_keepalives_idle-1tcp_keepalives_interval -1\n> temp_buffers16MBtemp_file_limit-1 temp_tablespacesTimeZoneAsia/Calcutta\n> timezone_abbreviationsDefaulttrace_notifyoff trace_recovery_messageslog\n> trace_sortofftrack_activities ontrack_activity_query_size1024track_countson\n> track_functionsnonetrack_io_timingoff transaction_deferrableoff\n> transaction_isolationread committed transaction_read_onlyoff\n> transform_null_equalsoff unix_socket_directoryunix_socket_group\n> unix_socket_permissions0777update_process_titleon vacuum_cost_delay0\n> vacuum_cost_limit200vacuum_cost_page_dirty 20vacuum_cost_page_hit1\n> vacuum_cost_page_miss 10vacuum_defer_cleanup_age0vacuum_freeze_min_age50000000\n> vacuum_freeze_table_age150000000wal_block_size 8192wal_buffers16MB\n> wal_keep_segments0 wal_levelminimalwal_receiver_status_interval10s\n> wal_segment_size16MBwal_sync_methodopen_datasync wal_writer_delay200ms\n> work_mem512MB xmlbinarybase64xmloptioncontentzero_damaged_pages off\n>\n>\n>\n>\n> 
Thanks\n> Girish Subbaramu.\n>\n>\n\nAs I see only Windows7 supported  ( with EnterpriseDB version of PostgreSQL 9.2 Windows installer )http://www.enterprisedb.com/products-services-training/products-overview/postgresql-overview/supported-platforms-and-release-lif\nHave you been tested with PostgreSQL 9.3 rc1 ?     same speed ?http://www.enterprisedb.com/products-services-training/pgdevdownload\nand some testing tips:- modify laptop power settings- compare disk speeds (laptop vs. desktop )- ...Imre2013/8/22 girish subbaramu <[email protected]>\n\nHi,I am running PostgreSQL 9.2.4 on windows 8  , 64 bit operating system , 4GB RAM.\nA laptop with i3 - 3110M , 2.4 GHZ . The database  came bundled with wapp stack 5.4.17-0. We have an php application that serves data from PostgreSQL 9.2.4.\nThe configuration runs with very good performance (3 sec response php + db ) on windows 7   32, 64 bit OS , 4GB RAM (desktops with i3-2100 3.10 GHZ ) .\nBut take around 25 seconds to render on windows 8 , the laptop. I was able to eliminate php , as the performance was as expected. (without DB calls)On the other part the database calls take more than 100 ms for simple queries (Example a table with just 10 row sometimes takes around 126 ms).  This information i was able to collect from the pg_log.\nThe php pages have multiple queries in them, a single query works as expected, but running multiple queries in the page causes the db performance to go down. Please note this setup is working fine (3 sec  overall including php ) on all  windows 7   32, 64 bit OS , desktops.\nAppreciate help in giving me an direction on how to get to the issue.The db size is 11mb only. Most of the tables have less than 100 rows with appropriate indexes. Some tables have more than 1000 rows , are not queried  in the php pages . The super user login is used from php . (Changing super user reserved connections did not help, tried changing shared _buffers and other setting , none of the setting seem to have any effect on the db performance )\nFollowing are the variable settings that works fine on  on all  windows 7   32, 64 bit OS , desktops. 
\n\n Name\n\nSettingallow_system_table_modsoffapplication_name\narchive_command(disabled)archive_modeoff\narchive_timeout0array_nullson\nauthentication_timeout1minautovacuumonautovacuum_analyze_scale_factor\n0.1autovacuum_analyze_threshold50autovacuum_freeze_max_age\n200000000autovacuum_max_workers3autovacuum_naptime\n1minautovacuum_vacuum_cost_delay20msautovacuum_vacuum_cost_limit\n-1autovacuum_vacuum_scale_factor0.2autovacuum_vacuum_threshold\n50backslash_quotesafe_encodingbgwriter_delay\n\n200msbgwriter_lru_maxpages100bgwriter_lru_multiplier\n2block_size8192bonjouroff\nbonjour_namebytea_outputescapecheck_function_bodies\noncheckpoint_completion_target0.5checkpoint_segments\n3checkpoint_timeout5mincheckpoint_warning30s\nclient_encodingUTF8client_min_messagesnotice\ncommit_delay0commit_siblings5\nconfig_fileC:/xxxx~2/POSTGR~1/data/postgresql.confconstraint_exclusionpartition\ncpu_index_tuple_cost0.005cpu_operator_cost0.0025\ncpu_tuple_cost0.01cursor_tuple_fraction0.1\ndata_directory\nC:/xxx~2/POSTGR~1/dataDateStyleISO, MDYdb_user_namespace\noffdeadlock_timeout1sdebug_assertionsoff\ndebug_pretty_printondebug_print_parseoff\ndebug_print_planoffdebug_print_rewrittenoff\n\ndefault_statistics_target100default_tablespace\ndefault_text_search_config\npg_catalog.englishdefault_transaction_deferrableoff\ndefault_transaction_isolation\nread committeddefault_transaction_read_onlyoffdefault_with_oids\noffdynamic_library_path$libdireffective_cache_size\n128MBeffective_io_concurrency0enable_bitmapscan\nonenable_hashaggonenable_hashjoinon\nenable_indexonlyscanonenable_indexscanon\nenable_materialonenable_mergejoinonenable_nestloop\nonenable_seqscanonenable_sorton\nenable_tidscanonescape_string_warningonevent_source\nPostgreSQLexit_on_erroroffexternal_pid_file\nextra_float_digits0from_collapse_limit8\nfsynconfull_page_writeson\ngeqoongeqo_effort5geqo_generations\n\n0geqo_pool_size0geqo_seed0\ngeqo_selection_bias2geqo_threshold12gin_fuzzy_search_limit\n0hba_fileC:/xxxx~2/POSTGR~1/data/pg_hba.confhot_standby\noffhot_standby_feedbackoffident_fileC:/xxxx~2/POSTGR~1/data/pg_ident.conf\nignore_system_indexesoffinteger_datetimeson\nIntervalStylepostgresjoin_collapse_limit8\nkrb_caseins_usersoffkrb_server_keyfilekrb_srvname\npostgreslc_collateEnglish_United States.1252lc_ctype\nEnglish_United States.1252lc_messagesEnglish_United States.1252\nlc_monetaryEnglish_United States.1252lc_numericEnglish_United States.1252\nlc_timeEnglish_United 
States.1252listen_addresses127.0.0.1\nlo_compat_privilegesofflocal_preload_libraries\n\nlog_autovacuum_min_duration-1log_checkpointsoff\n\nlog_connectionsofflog_destinationstderrlog_directory\npg_loglog_disconnectionsofflog_durationoff\nlog_error_verbositydefaultlog_executor_statsoff\nlog_file_mode0600log_filenamepostgresql-%Y-%m-%d_%H%M%S.log\nlog_hostnameofflog_line_prefix\nlog_lock_waitsofflog_min_duration_statement-1\n\nlog_min_error_statementerrorlog_min_messageswarning\nlog_parser_statsofflog_planner_statsofflog_rotation_age\n1dlog_rotation_size10MBlog_statementnone\nlog_statement_statsofflog_temp_files-1\nlog_timezoneAsia/Calcuttalog_truncate_on_rotationoff\nlogging_collectoronmaintenance_work_mem16MB\n\nmax_connections100max_files_per_process1000max_function_args\n100max_identifier_length63max_index_keys32\nmax_locks_per_transaction64max_pred_locks_per_transaction\n64max_prepared_transactions0max_stack_depth\n\n2MBmax_standby_archive_delay30smax_standby_streaming_delay\n30smax_wal_senders0password_encryptionon\nport5432post_auth_delay0\npre_auth_delay0quote_all_identifiersoffrandom_page_cost\n4replication_timeout1minrestart_after_crash\n\nonsearch_path\"$user\",viplsegment_size\n\n1GBseq_page_cost1server_encodingUTF8\n\nserver_version9.2.4server_version_num90204\nsession_replication_role\noriginshared_buffers1GBshared_preload_libraries\nsql_inheritanceonssloff\nssl_ca_filessl_cert_fileserver.crtssl_ciphers\nALL:!ADH:!LOW:!EXP:!MD5:@STRENGTHssl_crl_filessl_key_file\nserver.keyssl_renegotiation_limit512MBstandard_conforming_strings\nonstatement_timeout0stats_temp_directorypg_stat_tmp\nsuperuser_reserved_connections3synchronize_seqscans\non\nsynchronous_commitonsynchronous_standby_names\nsyslog_facilitynonesyslog_identpostgres\ntcp_keepalives_count0tcp_keepalives_idle-1\ntcp_keepalives_interval\n-1temp_buffers16MBtemp_file_limit-1\ntemp_tablespacesTimeZoneAsia/Calcutta\ntimezone_abbreviationsDefaulttrace_notifyoff\n\ntrace_recovery_messageslogtrace_sortofftrack_activities\nontrack_activity_query_size1024track_counts\n\nontrack_functionsnonetrack_io_timingoff\ntransaction_deferrableofftransaction_isolationread committed\ntransaction_read_onlyofftransform_null_equalsoff\nunix_socket_directoryunix_socket_group\nunix_socket_permissions0777update_process_titleon\nvacuum_cost_delay0vacuum_cost_limit200vacuum_cost_page_dirty\n20vacuum_cost_page_hit1vacuum_cost_page_miss\n\n10vacuum_defer_cleanup_age0vacuum_freeze_min_age\n\n50000000vacuum_freeze_table_age150000000wal_block_size\n8192wal_buffers16MBwal_keep_segments0\n\nwal_levelminimalwal_receiver_status_interval10s\nwal_segment_size16MBwal_sync_methodopen_datasync\nwal_writer_delay200mswork_mem512MB\nxmlbinarybase64xmloptioncontentzero_damaged_pages\noffThanksGirish Subbaramu.", "msg_date": "Thu, 22 Aug 2013 17:25:51 +0200", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.2.4 very slow on laptop with windows 8" }, { "msg_contents": "On 22/08/13 12:30, girish subbaramu wrote:\n>\n> I am running PostgreSQL 9.2.4 on windows 8 , 64 bit operating system ,\n> 4GB RAM.\n\n> A laptop with i3 - 3110M , 2.4 GHZ .\n> The database came bundled with wapp stack 5.4.17-0. 
We have an php\n> application that serves data from PostgreSQL 9.2.4.\n>\n> The configuration runs with very good performance (3 sec response php +\n> db ) on windows 7 32, 64 bit OS , 4GB RAM (desktops with i3-2100 3.10\n> GHZ ) .\n> But take around 25 seconds to render on windows 8 , the laptop.\n>\n> I was able to eliminate php , as the performance was as expected.\n> (without DB calls)\n> On the other part the database calls take more than 100 ms for simple\n> queries (Example a table with just 10 row sometimes takes around 126\n> ms). This information i was able to collect from the pg_log.\n\nFirst step - check the antivirus / security tools aren't interfering. \nThat can slow you down immensely.\n\nSecond step - have a quick look in your performance monitoring (you can \nget to it through\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 23 Aug 2013 09:14:23 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.2.4 very slow on laptop with windows 8" }, { "msg_contents": "Hi ,\nThanks for the inputs and direction.With the help of the support table i narrowed down to windows 2008 and 2003 servers.32 bit postgres 9.2.4 was very slow on windows 2008 (64 bit ), on running postgres 9.2.4 64 bit the response time was similar to what i used to get on windows 7. I did not have to do any performance tuning , the defaults worked.This works good for me.With this i was able to resolve postgres issues. \nThanksGirish Subbaramu \n\nFrom: [email protected]\nDate: Thu, 22 Aug 2013 17:25:51 +0200\nSubject: Re: [PERFORM] PostgreSQL 9.2.4 very slow on laptop with windows 8\nTo: [email protected]\nCC: [email protected]\n\nAs I see only Windows7 supported ( with EnterpriseDB version of PostgreSQL 9.2 Windows installer )http://www.enterprisedb.com/products-services-training/products-overview/postgresql-overview/supported-platforms-and-release-lif\n\n\nHave you been tested with PostgreSQL 9.3 rc1 ? same speed ?http://www.enterprisedb.com/products-services-training/pgdevdownload\n\n\nand some testing tips:\n- modify laptop power settings\n- compare disk speeds (laptop vs. desktop )\n- ...\n\nImre\n\n2013/8/22 girish subbaramu <[email protected]>\n\n\n\n\n\n\nHi,\nI am running PostgreSQL 9.2.4 on windows 8 , 64 bit operating system , 4GB RAM.\n\nA laptop with i3 - 3110M , 2.4 GHZ . The database came bundled with wapp stack 5.4.17-0. We have an php application that serves data from PostgreSQL 9.2.4.\n\n\nThe configuration runs with very good performance (3 sec response php + db ) on windows 7 32, 64 bit OS , 4GB RAM (desktops with i3-2100 3.10 GHZ ) .\n\nBut take around 25 seconds to render on windows 8 , the laptop. \nI was able to eliminate php , as the performance was as expected. (without DB calls)On the other part the database calls take more than 100 ms for simple queries (Example a table with just 10 row sometimes takes around 126 ms). This information i was able to collect from the pg_log.\n\n\nThe php pages have multiple queries in them, a single query works as expected, but running multiple queries in the page causes the db performance to go down. Please note this setup is working fine (3 sec overall including php ) on all windows 7 32, 64 bit OS , desktops.\n\n\nAppreciate help in giving me an direction on how to get to the issue.The db size is 11mb only. 
Most of the tables have less than 100 rows with appropriate indexes. Some tables have more than 1000 rows , are not queried in the php pages . The super user login is used from php . (Changing super user reserved connections did not help, tried changing shared _buffers and other setting , none of the setting seem to have any effect on the db performance )\n\n\nFollowing are the variable settings that works fine on on all windows 7 32, 64 bit OS , desktops. \n\n\n\n Name\n\nSettingallow_system_table_modsoffapplication_name\n\narchive_command(disabled)archive_modeoff\n\narchive_timeout0array_nullson\n\nauthentication_timeout1minautovacuumonautovacuum_analyze_scale_factor\n\n0.1autovacuum_analyze_threshold50autovacuum_freeze_max_age\n\n200000000autovacuum_max_workers3autovacuum_naptime\n\n1minautovacuum_vacuum_cost_delay20msautovacuum_vacuum_cost_limit\n\n-1autovacuum_vacuum_scale_factor0.2autovacuum_vacuum_threshold\n\n50backslash_quotesafe_encodingbgwriter_delay\n\n200msbgwriter_lru_maxpages100bgwriter_lru_multiplier\n\n2block_size8192bonjouroff\n\nbonjour_namebytea_outputescapecheck_function_bodies\n\noncheckpoint_completion_target0.5checkpoint_segments\n\n3checkpoint_timeout5mincheckpoint_warning30s\n\nclient_encodingUTF8client_min_messagesnotice\n\ncommit_delay0commit_siblings5\n\nconfig_fileC:/xxxx~2/POSTGR~1/data/postgresql.confconstraint_exclusionpartition\n\ncpu_index_tuple_cost0.005cpu_operator_cost0.0025\n\ncpu_tuple_cost0.01cursor_tuple_fraction0.1\ndata_directory\nC:/xxx~2/POSTGR~1/dataDateStyleISO, MDYdb_user_namespace\n\noffdeadlock_timeout1sdebug_assertionsoff\n\ndebug_pretty_printondebug_print_parseoff\n\ndebug_print_planoffdebug_print_rewrittenoff\n\ndefault_statistics_target100default_tablespace\ndefault_text_search_config\npg_catalog.englishdefault_transaction_deferrableoff\ndefault_transaction_isolation\nread committeddefault_transaction_read_onlyoffdefault_with_oids\n\noffdynamic_library_path$libdireffective_cache_size\n\n128MBeffective_io_concurrency0enable_bitmapscan\n\nonenable_hashaggonenable_hashjoinon\n\nenable_indexonlyscanonenable_indexscanon\n\nenable_materialonenable_mergejoinonenable_nestloop\n\nonenable_seqscanonenable_sorton\n\nenable_tidscanonescape_string_warningonevent_source\n\nPostgreSQLexit_on_erroroffexternal_pid_file\n\nextra_float_digits0from_collapse_limit8\n\nfsynconfull_page_writeson\n\ngeqoongeqo_effort5geqo_generations\n\n0geqo_pool_size0geqo_seed0\n\ngeqo_selection_bias2geqo_threshold12gin_fuzzy_search_limit\n\n0hba_fileC:/xxxx~2/POSTGR~1/data/pg_hba.confhot_standby\n\noffhot_standby_feedbackoffident_fileC:/xxxx~2/POSTGR~1/data/pg_ident.conf\n\nignore_system_indexesoffinteger_datetimeson\n\nIntervalStylepostgresjoin_collapse_limit8\n\nkrb_caseins_usersoffkrb_server_keyfilekrb_srvname\n\npostgreslc_collateEnglish_United States.1252lc_ctype\n\nEnglish_United States.1252lc_messagesEnglish_United States.1252\n\nlc_monetaryEnglish_United States.1252lc_numericEnglish_United States.1252\n\nlc_timeEnglish_United 
States.1252listen_addresses127.0.0.1\n\nlo_compat_privilegesofflocal_preload_libraries\n\nlog_autovacuum_min_duration-1log_checkpointsoff\n\nlog_connectionsofflog_destinationstderrlog_directory\n\npg_loglog_disconnectionsofflog_durationoff\n\nlog_error_verbositydefaultlog_executor_statsoff\n\nlog_file_mode0600log_filenamepostgresql-%Y-%m-%d_%H%M%S.log\n\nlog_hostnameofflog_line_prefix\n\nlog_lock_waitsofflog_min_duration_statement-1\n\nlog_min_error_statementerrorlog_min_messageswarning\n\nlog_parser_statsofflog_planner_statsofflog_rotation_age\n\n1dlog_rotation_size10MBlog_statementnone\n\nlog_statement_statsofflog_temp_files-1\n\nlog_timezoneAsia/Calcuttalog_truncate_on_rotationoff\n\nlogging_collectoronmaintenance_work_mem16MB\n\nmax_connections100max_files_per_process1000max_function_args\n\n100max_identifier_length63max_index_keys32\n\nmax_locks_per_transaction64max_pred_locks_per_transaction\n\n64max_prepared_transactions0max_stack_depth\n\n2MBmax_standby_archive_delay30smax_standby_streaming_delay\n\n30smax_wal_senders0password_encryptionon\n\nport5432post_auth_delay0\n\npre_auth_delay0quote_all_identifiersoffrandom_page_cost\n\n4replication_timeout1minrestart_after_crash\n\nonsearch_path\"$user\",viplsegment_size\n\n1GBseq_page_cost1server_encodingUTF8\n\nserver_version9.2.4server_version_num90204\nsession_replication_role\noriginshared_buffers1GBshared_preload_libraries\n\nsql_inheritanceonssloff\n\nssl_ca_filessl_cert_fileserver.crtssl_ciphers\n\nALL:!ADH:!LOW:!EXP:!MD5:@STRENGTHssl_crl_filessl_key_file\n\nserver.keyssl_renegotiation_limit512MBstandard_conforming_strings\n\nonstatement_timeout0stats_temp_directorypg_stat_tmp\n\nsuperuser_reserved_connections3synchronize_seqscans\non\nsynchronous_commitonsynchronous_standby_names\n\nsyslog_facilitynonesyslog_identpostgres\n\ntcp_keepalives_count0tcp_keepalives_idle-1\ntcp_keepalives_interval\n-1temp_buffers16MBtemp_file_limit-1\n\ntemp_tablespacesTimeZoneAsia/Calcutta\n\ntimezone_abbreviationsDefaulttrace_notifyoff\n\ntrace_recovery_messageslogtrace_sortofftrack_activities\n\nontrack_activity_query_size1024track_counts\n\nontrack_functionsnonetrack_io_timingoff\n\ntransaction_deferrableofftransaction_isolationread committed\n\ntransaction_read_onlyofftransform_null_equalsoff\n\nunix_socket_directoryunix_socket_group\n\nunix_socket_permissions0777update_process_titleon\n\nvacuum_cost_delay0vacuum_cost_limit200vacuum_cost_page_dirty\n\n20vacuum_cost_page_hit1vacuum_cost_page_miss\n\n10vacuum_defer_cleanup_age0vacuum_freeze_min_age\n\n50000000vacuum_freeze_table_age150000000wal_block_size\n\n8192wal_buffers16MBwal_keep_segments0\n\nwal_levelminimalwal_receiver_status_interval10s\n\nwal_segment_size16MBwal_sync_methodopen_datasync\n\nwal_writer_delay200mswork_mem512MB\n\nxmlbinarybase64xmloptioncontentzero_damaged_pages\n\noff\n\n\n\n\nThanksGirish Subbaramu.\n \t\t \t \t\t \n\n\n\n \t\t \t \t\t \n\n\n\nHi ,Thanks for the inputs and direction.With the help of the support table i narrowed down to windows 2008 and 2003 servers.32 bit postgres 9.2.4 was very slow on windows 2008 (64 bit ), on running postgres 9.2.4 64 bit  the response time was similar to what i used to get on windows 7. I did not have to do any performance tuning , the defaults worked.This works good for me.With this i was able to resolve postgres issues. 
ThanksGirish Subbaramu From: [email protected]: Thu, 22 Aug 2013 17:25:51 +0200Subject: Re: [PERFORM] PostgreSQL 9.2.4 very slow on laptop with windows 8To: [email protected]: [email protected] I see only Windows7 supported  ( with EnterpriseDB version of PostgreSQL 9.2 Windows installer )http://www.enterprisedb.com/products-services-training/products-overview/postgresql-overview/supported-platforms-and-release-lif\nHave you been tested with PostgreSQL 9.3 rc1 ?     same speed ?http://www.enterprisedb.com/products-services-training/pgdevdownload\nand some testing tips:- modify laptop power settings- compare disk speeds (laptop vs. desktop )- ...Imre2013/8/22 girish subbaramu <[email protected]>\n\nHi,I am running PostgreSQL 9.2.4 on windows 8  , 64 bit operating system , 4GB RAM.\nA laptop with i3 - 3110M , 2.4 GHZ . The database  came bundled with wapp stack 5.4.17-0. We have an php application that serves data from PostgreSQL 9.2.4.\nThe configuration runs with very good performance (3 sec response php + db ) on windows 7   32, 64 bit OS , 4GB RAM (desktops with i3-2100 3.10 GHZ ) .\nBut take around 25 seconds to render on windows 8 , the laptop. I was able to eliminate php , as the performance was as expected. (without DB calls)On the other part the database calls take more than 100 ms for simple queries (Example a table with just 10 row sometimes takes around 126 ms).  This information i was able to collect from the pg_log.\nThe php pages have multiple queries in them, a single query works as expected, but running multiple queries in the page causes the db performance to go down. Please note this setup is working fine (3 sec  overall including php ) on all  windows 7   32, 64 bit OS , desktops.\nAppreciate help in giving me an direction on how to get to the issue.The db size is 11mb only. Most of the tables have less than 100 rows with appropriate indexes. Some tables have more than 1000 rows , are not queried  in the php pages . The super user login is used from php . (Changing super user reserved connections did not help, tried changing shared _buffers and other setting , none of the setting seem to have any effect on the db performance )\nFollowing are the variable settings that works fine on  on all  windows 7   32, 64 bit OS , desktops. 
\n\n Name\n\nSettingallow_system_table_modsoffapplication_name\narchive_command(disabled)archive_modeoff\narchive_timeout0array_nullson\nauthentication_timeout1minautovacuumonautovacuum_analyze_scale_factor\n0.1autovacuum_analyze_threshold50autovacuum_freeze_max_age\n200000000autovacuum_max_workers3autovacuum_naptime\n1minautovacuum_vacuum_cost_delay20msautovacuum_vacuum_cost_limit\n-1autovacuum_vacuum_scale_factor0.2autovacuum_vacuum_threshold\n50backslash_quotesafe_encodingbgwriter_delay\n\n200msbgwriter_lru_maxpages100bgwriter_lru_multiplier\n2block_size8192bonjouroff\nbonjour_namebytea_outputescapecheck_function_bodies\noncheckpoint_completion_target0.5checkpoint_segments\n3checkpoint_timeout5mincheckpoint_warning30s\nclient_encodingUTF8client_min_messagesnotice\ncommit_delay0commit_siblings5\nconfig_fileC:/xxxx~2/POSTGR~1/data/postgresql.confconstraint_exclusionpartition\ncpu_index_tuple_cost0.005cpu_operator_cost0.0025\ncpu_tuple_cost0.01cursor_tuple_fraction0.1\ndata_directory\nC:/xxx~2/POSTGR~1/dataDateStyleISO, MDYdb_user_namespace\noffdeadlock_timeout1sdebug_assertionsoff\ndebug_pretty_printondebug_print_parseoff\ndebug_print_planoffdebug_print_rewrittenoff\n\ndefault_statistics_target100default_tablespace\ndefault_text_search_config\npg_catalog.englishdefault_transaction_deferrableoff\ndefault_transaction_isolation\nread committeddefault_transaction_read_onlyoffdefault_with_oids\noffdynamic_library_path$libdireffective_cache_size\n128MBeffective_io_concurrency0enable_bitmapscan\nonenable_hashaggonenable_hashjoinon\nenable_indexonlyscanonenable_indexscanon\nenable_materialonenable_mergejoinonenable_nestloop\nonenable_seqscanonenable_sorton\nenable_tidscanonescape_string_warningonevent_source\nPostgreSQLexit_on_erroroffexternal_pid_file\nextra_float_digits0from_collapse_limit8\nfsynconfull_page_writeson\ngeqoongeqo_effort5geqo_generations\n\n0geqo_pool_size0geqo_seed0\ngeqo_selection_bias2geqo_threshold12gin_fuzzy_search_limit\n0hba_fileC:/xxxx~2/POSTGR~1/data/pg_hba.confhot_standby\noffhot_standby_feedbackoffident_fileC:/xxxx~2/POSTGR~1/data/pg_ident.conf\nignore_system_indexesoffinteger_datetimeson\nIntervalStylepostgresjoin_collapse_limit8\nkrb_caseins_usersoffkrb_server_keyfilekrb_srvname\npostgreslc_collateEnglish_United States.1252lc_ctype\nEnglish_United States.1252lc_messagesEnglish_United States.1252\nlc_monetaryEnglish_United States.1252lc_numericEnglish_United States.1252\nlc_timeEnglish_United 
States.1252listen_addresses127.0.0.1\nlo_compat_privilegesofflocal_preload_libraries\n\nlog_autovacuum_min_duration-1log_checkpointsoff\n\nlog_connectionsofflog_destinationstderrlog_directory\npg_loglog_disconnectionsofflog_durationoff\nlog_error_verbositydefaultlog_executor_statsoff\nlog_file_mode0600log_filenamepostgresql-%Y-%m-%d_%H%M%S.log\nlog_hostnameofflog_line_prefix\nlog_lock_waitsofflog_min_duration_statement-1\n\nlog_min_error_statementerrorlog_min_messageswarning\nlog_parser_statsofflog_planner_statsofflog_rotation_age\n1dlog_rotation_size10MBlog_statementnone\nlog_statement_statsofflog_temp_files-1\nlog_timezoneAsia/Calcuttalog_truncate_on_rotationoff\nlogging_collectoronmaintenance_work_mem16MB\n\nmax_connections100max_files_per_process1000max_function_args\n100max_identifier_length63max_index_keys32\nmax_locks_per_transaction64max_pred_locks_per_transaction\n64max_prepared_transactions0max_stack_depth\n\n2MBmax_standby_archive_delay30smax_standby_streaming_delay\n30smax_wal_senders0password_encryptionon\nport5432post_auth_delay0\npre_auth_delay0quote_all_identifiersoffrandom_page_cost\n4replication_timeout1minrestart_after_crash\n\nonsearch_path\"$user\",viplsegment_size\n\n1GBseq_page_cost1server_encodingUTF8\n\nserver_version9.2.4server_version_num90204\nsession_replication_role\noriginshared_buffers1GBshared_preload_libraries\nsql_inheritanceonssloff\nssl_ca_filessl_cert_fileserver.crtssl_ciphers\nALL:!ADH:!LOW:!EXP:!MD5:@STRENGTHssl_crl_filessl_key_file\nserver.keyssl_renegotiation_limit512MBstandard_conforming_strings\nonstatement_timeout0stats_temp_directorypg_stat_tmp\nsuperuser_reserved_connections3synchronize_seqscans\non\nsynchronous_commitonsynchronous_standby_names\nsyslog_facilitynonesyslog_identpostgres\ntcp_keepalives_count0tcp_keepalives_idle-1\ntcp_keepalives_interval\n-1temp_buffers16MBtemp_file_limit-1\ntemp_tablespacesTimeZoneAsia/Calcutta\ntimezone_abbreviationsDefaulttrace_notifyoff\n\ntrace_recovery_messageslogtrace_sortofftrack_activities\nontrack_activity_query_size1024track_counts\n\nontrack_functionsnonetrack_io_timingoff\ntransaction_deferrableofftransaction_isolationread committed\ntransaction_read_onlyofftransform_null_equalsoff\nunix_socket_directoryunix_socket_group\nunix_socket_permissions0777update_process_titleon\nvacuum_cost_delay0vacuum_cost_limit200vacuum_cost_page_dirty\n20vacuum_cost_page_hit1vacuum_cost_page_miss\n\n10vacuum_defer_cleanup_age0vacuum_freeze_min_age\n\n50000000vacuum_freeze_table_age150000000wal_block_size\n8192wal_buffers16MBwal_keep_segments0\n\nwal_levelminimalwal_receiver_status_interval10s\nwal_segment_size16MBwal_sync_methodopen_datasync\nwal_writer_delay200mswork_mem512MB\nxmlbinarybase64xmloptioncontentzero_damaged_pages\noffThanksGirish Subbaramu.", "msg_date": "Mon, 26 Aug 2013 07:23:32 +0000", "msg_from": "girish subbaramu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 9.2.4 very slow on laptop with windows 8" } ]
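
As a hedged follow-up to the diagnosis in this thread (32-bit binaries on a 64-bit Windows install, plus antivirus and power settings), a short psql session along these lines can confirm the build and the memory settings before anything else is tuned. The 100 ms threshold and the table name are placeholders, not values taken from the thread.

    -- Confirm whether the running server is a 32-bit or 64-bit build:
    SELECT version();

    -- The settings dump above shows shared_buffers = 1GB and work_mem = 512MB;
    -- both are on the high side for a 4GB Windows machine, especially with a
    -- 32-bit build. Check what the server is actually using:
    SHOW shared_buffers;
    SHOW work_mem;

    -- Log any statement slower than 100 ms for this session (requires a
    -- superuser connection, which the report says the application uses), so
    -- slow queries appear in pg_log with their durations:
    SET log_min_duration_statement = '100ms';

    -- Then time one of the suspect page queries directly, e.g.:
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM small_lookup_table;  -- placeholder name
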
[ { "msg_contents": "Hi,\n\nI wasn't whether or not to mail to the novice mailing list of this one.\n Since this is performance related I'm posting it here, but I am definitely\na novice at postgresql - converting from mssql just now.\n\nI have a ~2.5gb table with ~5M rows of data. A query that groups by two\nfields and sums a floating field takes approximately 122 seconds. The\nequivalent query takes ~ 8seconds in my previous sql server express\ninstallation.\n\nI've tried to vary the parameters in postgresql.conf:\nI've tried wavering shared buffers from 512mb to 4000mb\nand working_mem from 64mb to 4000mb (i thought this might be the answer\nsince the execution plan (referenced below) indicates that the sort relies\non an External Merge Disk method)\nI've increased the default_statistics_target to 10000 and full vacuum\nanalyzed\nI realize there are no indexes on this table. My main concern is why I\ncan't get this to run as fast as in sql server express (which also has no\nindexes, and the same query takes about 8 seconds)\n\nMy system: Windows Professional 64-bit\n8 gb of ram\nIntel i5-220M CPU @ 2.5GHz\n\nHere is the link to the execution plan: http://explain.depesz.com/s/Ytx3\n\nThanks a lot in advance and do let me know if you require any more\ninformation to make an informed opinion,\nA\n\nHi,I wasn't whether or not to mail to the novice mailing list of this one.  Since this is performance related I'm posting it here, but I am definitely a novice at postgresql - converting from mssql just now.\nI have a ~2.5gb table with ~5M rows of data.  A query that groups by two fields and sums a floating field takes approximately 122 seconds.  The equivalent query takes ~ 8seconds in my previous sql server express installation.\nI've tried to vary the parameters in postgresql.conf:I've tried wavering shared buffers from 512mb to 4000mband working_mem from 64mb to 4000mb (i thought this might be the answer since the execution plan (referenced below) indicates that the sort relies on an External Merge Disk method)\nI've increased the default_statistics_target  to 10000 and full vacuum analyzedI realize there are no indexes on this table.  
My main concern is why I can't get this to run as fast as in sql server express (which also has no indexes, and the same query takes about 8 seconds)\nMy system:  Windows Professional 64-bit8 gb of ramIntel i5-220M CPU @ 2.5GHz Here is the link to the execution plan:  http://explain.depesz.com/s/Ytx3\nThanks a lot in advance and do let me know if you require any more information to make an informed opinion,A", "msg_date": "Mon, 26 Aug 2013 01:09:59 -0400", "msg_from": "\"Adam Ma'ruf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Poor performance on simple queries compared to sql server express" }, { "msg_contents": "Hello\n\n\nIt is little bit strange - can you send a info about your PostgreSQL\nversion, send a query, and table description?\n\nIn this case, PostgreSQL should to use a hash aggregate, but from some\nstrange reason, pg didn't do it.\n\nSecond strange issue is speed of external sort - it is less than I can\nexpect.\n\nWhat I know - a usual advice for MS Win is setting minimal shared bufferes\n- 512MB can be too much there.\n\nRegards\n\nPavel Stehule\n\n\n2013/8/26 Adam Ma'ruf <[email protected]>\n\n> Hi,\n>\n> I wasn't whether or not to mail to the novice mailing list of this one.\n> Since this is performance related I'm posting it here, but I am definitely\n> a novice at postgresql - converting from mssql just now.\n>\n> I have a ~2.5gb table with ~5M rows of data. A query that groups by two\n> fields and sums a floating field takes approximately 122 seconds. The\n> equivalent query takes ~ 8seconds in my previous sql server express\n> installation.\n>\n> I've tried to vary the parameters in postgresql.conf:\n> I've tried wavering shared buffers from 512mb to 4000mb\n> and working_mem from 64mb to 4000mb (i thought this might be the answer\n> since the execution plan (referenced below) indicates that the sort relies\n> on an External Merge Disk method)\n> I've increased the default_statistics_target to 10000 and full vacuum\n> analyzed\n> I realize there are no indexes on this table. My main concern is why I\n> can't get this to run as fast as in sql server express (which also has no\n> indexes, and the same query takes about 8 seconds)\n>\n> My system: Windows Professional 64-bit\n> 8 gb of ram\n> Intel i5-220M CPU @ 2.5GHz\n>\n> Here is the link to the execution plan: http://explain.depesz.com/s/Ytx3\n>\n> Thanks a lot in advance and do let me know if you require any more\n> information to make an informed opinion,\n> A\n>\n\nHelloIt is little bit strange - can you send a info about your PostgreSQL version, send a query, and table description?In this case, PostgreSQL should to use a hash aggregate, but from some strange reason, pg didn't do it.\nSecond strange issue is speed of external sort - it is less than I can expect.What I know - a usual advice for MS Win is setting minimal shared bufferes - 512MB can be too much there. \nRegardsPavel Stehule2013/8/26 Adam Ma'ruf <[email protected]>\nHi,I wasn't whether or not to mail to the novice mailing list of this one.  Since this is performance related I'm posting it here, but I am definitely a novice at postgresql - converting from mssql just now.\nI have a ~2.5gb table with ~5M rows of data.  A query that groups by two fields and sums a floating field takes approximately 122 seconds.  
The equivalent query takes ~ 8seconds in my previous sql server express installation.\nI've tried to vary the parameters in postgresql.conf:I've tried wavering shared buffers from 512mb to 4000mband working_mem from 64mb to 4000mb (i thought this might be the answer since the execution plan (referenced below) indicates that the sort relies on an External Merge Disk method)\nI've increased the default_statistics_target  to 10000 and full vacuum analyzedI realize there are no indexes on this table.  My main concern is why I can't get this to run as fast as in sql server express (which also has no indexes, and the same query takes about 8 seconds)\nMy system:  Windows Professional 64-bit8 gb of ramIntel i5-220M CPU @ 2.5GHz Here is the link to the execution plan:  http://explain.depesz.com/s/Ytx3\nThanks a lot in advance and do let me know if you require any more information to make an informed opinion,A", "msg_date": "Mon, 26 Aug 2013 08:36:37 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance on simple queries compared to sql server express" }, { "msg_contents": "Sure\n\nI just upgraded to 9.2.4. The query is:\nSELECT quebec_four\n , sierra\n , SUM(dollaramount) as dollaramount\n FROM alpha_quebec_echo\n GROUP BY quebec_four\n , sierra\n\nalpha_quebec_echo has 5,409,743 rows and 39 columns. Quebec_four and\nsierra are both varchar, dollar amount is a floating point field. It has\nno indexes (but neither did the mssql express table). Any other details\nyou need?\n\nThanks,\nA\n\n\nOn Mon, Aug 26, 2013 at 2:36 AM, Pavel Stehule <[email protected]>wrote:\n\n> Hello\n>\n>\n> It is little bit strange - can you send a info about your PostgreSQL\n> version, send a query, and table description?\n>\n> In this case, PostgreSQL should to use a hash aggregate, but from some\n> strange reason, pg didn't do it.\n>\n> Second strange issue is speed of external sort - it is less than I can\n> expect.\n>\n> What I know - a usual advice for MS Win is setting minimal shared bufferes\n> - 512MB can be too much there.\n>\n> Regards\n>\n> Pavel Stehule\n>\n>\n> 2013/8/26 Adam Ma'ruf <[email protected]>\n>\n>> Hi,\n>>\n>> I wasn't whether or not to mail to the novice mailing list of this one.\n>> Since this is performance related I'm posting it here, but I am definitely\n>> a novice at postgresql - converting from mssql just now.\n>>\n>> I have a ~2.5gb table with ~5M rows of data. A query that groups by two\n>> fields and sums a floating field takes approximately 122 seconds. The\n>> equivalent query takes ~ 8seconds in my previous sql server express\n>> installation.\n>>\n>> I've tried to vary the parameters in postgresql.conf:\n>> I've tried wavering shared buffers from 512mb to 4000mb\n>> and working_mem from 64mb to 4000mb (i thought this might be the answer\n>> since the execution plan (referenced below) indicates that the sort relies\n>> on an External Merge Disk method)\n>> I've increased the default_statistics_target to 10000 and full vacuum\n>> analyzed\n>> I realize there are no indexes on this table. 
My main concern is why I\n>> can't get this to run as fast as in sql server express (which also has no\n>> indexes, and the same query takes about 8 seconds)\n>>\n>> My system: Windows Professional 64-bit\n>> 8 gb of ram\n>> Intel i5-220M CPU @ 2.5GHz\n>>\n>> Here is the link to the execution plan: http://explain.depesz.com/s/Ytx3\n>>\n>> Thanks a lot in advance and do let me know if you require any more\n>> information to make an informed opinion,\n>> A\n>>\n>\n>\n\nSureI just upgraded to 9.2.4.  The query is:SELECT        quebec_four            , sierra            , SUM(dollaramount) as dollaramount  FROM alpha_quebec_echo\n  GROUP BY   quebec_four             , sierraalpha_quebec_echo has 5,409,743 rows and 39 columns.  Quebec_four and sierra are both varchar, dollar amount is a floating point field.  It has no indexes (but neither did the mssql express table).  Any other details you need?\nThanks,AOn Mon, Aug 26, 2013 at 2:36 AM, Pavel Stehule <[email protected]> wrote:\nHelloIt is little bit strange - can you send a info about your PostgreSQL version, send a query, and table description?\nIn this case, PostgreSQL should to use a hash aggregate, but from some strange reason, pg didn't do it.\nSecond strange issue is speed of external sort - it is less than I can expect.What I know - a usual advice for MS Win is setting minimal shared bufferes - 512MB can be too much there. \nRegardsPavel Stehule\n2013/8/26 Adam Ma'ruf <[email protected]>\nHi,I wasn't whether or not to mail to the novice mailing list of this one.  Since this is performance related I'm posting it here, but I am definitely a novice at postgresql - converting from mssql just now.\nI have a ~2.5gb table with ~5M rows of data.  A query that groups by two fields and sums a floating field takes approximately 122 seconds.  The equivalent query takes ~ 8seconds in my previous sql server express installation.\nI've tried to vary the parameters in postgresql.conf:I've tried wavering shared buffers from 512mb to 4000mband working_mem from 64mb to 4000mb (i thought this might be the answer since the execution plan (referenced below) indicates that the sort relies on an External Merge Disk method)\nI've increased the default_statistics_target  to 10000 and full vacuum analyzedI realize there are no indexes on this table.  My main concern is why I can't get this to run as fast as in sql server express (which also has no indexes, and the same query takes about 8 seconds)\nMy system:  Windows Professional 64-bit8 gb of ramIntel i5-220M CPU @ 2.5GHz Here is the link to the execution plan:  http://explain.depesz.com/s/Ytx3\nThanks a lot in advance and do let me know if you require any more information to make an informed opinion,A", "msg_date": "Mon, 26 Aug 2013 09:02:54 -0400", "msg_from": "\"Adam Ma'ruf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor performance on simple queries compared to sql server express" }, { "msg_contents": "On 26 Srpen 2013, 15:02, Adam Ma'ruf wrote:\n> Sure\n>\n> I just upgraded to 9.2.4. The query is:\n> SELECT quebec_four\n> , sierra\n> , SUM(dollaramount) as dollaramount\n> FROM alpha_quebec_echo\n> GROUP BY quebec_four\n> , sierra\n>\n> alpha_quebec_echo has 5,409,743 rows and 39 columns. Quebec_four and\n> sierra are both varchar, dollar amount is a floating point field. It has\n> no indexes (but neither did the mssql express table). 
Any other details\n> you need?\n>\n> Thanks,\n> A\n\nHi,\n\nIt's quite clear why the query is so slow - the plan is using on-disk sort\nwith ~5M rows, and that's consuming a lot of time (almost 120 seconds).\n\nI'm wondering why it chose the sort in the first place. I'd guess it'll\nchoose hash aggregate, which does not require sorted input.\n\nCan you try running \"set enable_sort = false\" and then explain of the query?\n\nIf that does not change the plan to \"HashAggregate\" instead of\n\"GroupAggregate\", please check and post values of enable_* and cost_*\nvariables.\n\nAnother question is why it's doing the sort on disk and not in memory. The\nexplain you've posted shows it requires ~430MB on disk, and in my\nexperience it usually requires ~3x that much to do the in-memory sort.\n\nI see you've set work_mem=4GB, is that correct? Can you try with a lower\nvalue - say, 1 or 2GB? I'm not sure how this works on Windows, though.\nMaybe there's some other limit (and SQL Server is not hitting it, because\nit's native Windows application).\n\nCan you prepare a testcase (table structure + data) and post it somewhere?\nOr at least the structure, if it's not possible to share the data.\n\nAlso, output from \"select * from pg_settings\" would be helpful.\n\nTomas\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Aug 2013 15:40:15 +0200", "msg_from": "\"Tomas Vondra\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance on simple queries compared to sql server express" }, { "msg_contents": "Hi\n\nThanks for the response. I reran the query but first ran the statement you\nprovided and set working mem to 2gb. It ended up taking 133s and group\naggregate was still used\n\nHere are the values you asked for:\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_indexonlyscan = on\n#enable_material = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n#random_page_cost = 4.0 # same scale as above\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\n#effective_cache_size = 6000MB\n\n\nThe output of select * from pg_statistics is large...should I attach it as\na separate file (not sure if that's allowed on these mailing lists)\n\nThe data is ~2.5gb, I can't think of any place I can upload it. I can\nprovide the columns and data type. 
it's a subset of public data from\nusaspending.gov\n\ncolumn_name, datatype, ordinal\nposition, nullable?\n idx integer 1 YES obligatedamount double precision 2 YES\nbaseandexercisedoptionsvalue double precision 3 YES\nbaseandalloptionsvalue double\nprecision 4 YES maj_fund_agency_cat character varying 5 YES\ncontractingofficeagencyid character varying 6 YES contractingofficeid\ncharacter\nvarying 7 YES fundingrequestingagencyid character varying 8 YES\nfundingrequestingofficeid character varying 9 YES signeddate date 10 YES\neffectivedate date 11 YES currentcompletiondate date 12 YES\nultimatecompletiondate date 13 YES lastdatetoorder character varying 14 YES\ntypeofcontractpricing character varying 15 YES multiyearcontract character\nvarying 16 YES vendorname character varying 17 YES dunsnumber character\nvarying 18 YES parentdunsnumber character varying 19 YES psc_cat character\nvarying 20 YES productorservicecode character varying 21 YES\nprincipalnaicscode character varying 22 YES piid character varying 23 YES\nmodnumber character varying 24 YES fiscal_year character varying 25 YES\nidvpiid character varying 26 YES extentcompeted character varying 27 YES\nnumberofoffersreceived double precision 28 YES competitiveprocedures character\nvarying 29 YES solicitationprocedures character varying 30 YES\nevaluatedpreference character varying 31 YES firm8aflag character varying\n32 YES sdbflag character varying 33 YES\nissbacertifiedsmalldisadvantagedbusiness character varying 34 YES\nwomenownedflag character varying 35 YES veteranownedflag character varying\n36 YES minorityownedbusinessflag character varying 37 YES data_source text\n38 YES psc_cd character varying 39 YES\n\n\n\n\nOn Mon, Aug 26, 2013 at 9:40 AM, Tomas Vondra <[email protected]> wrote:\n\n> On 26 Srpen 2013, 15:02, Adam Ma'ruf wrote:\n> > Sure\n> >\n> > I just upgraded to 9.2.4. The query is:\n> > SELECT quebec_four\n> > , sierra\n> > , SUM(dollaramount) as dollaramount\n> > FROM alpha_quebec_echo\n> > GROUP BY quebec_four\n> > , sierra\n> >\n> > alpha_quebec_echo has 5,409,743 rows and 39 columns. Quebec_four and\n> > sierra are both varchar, dollar amount is a floating point field. It has\n> > no indexes (but neither did the mssql express table). Any other details\n> > you need?\n> >\n> > Thanks,\n> > A\n>\n> Hi,\n>\n> It's quite clear why the query is so slow - the plan is using on-disk sort\n> with ~5M rows, and that's consuming a lot of time (almost 120 seconds).\n>\n> I'm wondering why it chose the sort in the first place. I'd guess it'll\n> choose hash aggregate, which does not require sorted input.\n>\n> Can you try running \"set enable_sort = false\" and then explain of the\n> query?\n>\n> If that does not change the plan to \"HashAggregate\" instead of\n> \"GroupAggregate\", please check and post values of enable_* and cost_*\n> variables.\n>\n> Another question is why it's doing the sort on disk and not in memory. The\n> explain you've posted shows it requires ~430MB on disk, and in my\n> experience it usually requires ~3x that much to do the in-memory sort.\n>\n> I see you've set work_mem=4GB, is that correct? Can you try with a lower\n> value - say, 1 or 2GB? 
I'm not sure how this works on Windows, though.\n> Maybe there's some other limit (and SQL Server is not hitting it, because\n> it's native Windows application).\n>\n> Can you prepare a testcase (table structure + data) and post it somewhere?\n> Or at least the structure, if it's not possible to share the data.\n>\n> Also, output from \"select * from pg_settings\" would be helpful.\n>\n> Tomas\n>\n>\n>\n\nHiThanks for the response.  I reran the query but first ran the statement you provided and set working mem to 2gb.  It ended up taking 133s and group aggregate was still used\nHere are the values you asked for:# - Planner Method Configuration -#enable_bitmapscan = on#enable_hashagg = on#enable_hashjoin = on#enable_indexscan = on\n#enable_indexonlyscan = on#enable_material = on#enable_mergejoin = on#enable_nestloop = on#enable_seqscan = on#enable_sort = on#enable_tidscan = on\n# - Planner Cost Constants -#seq_page_cost = 1.0 # measured on an arbitrary scale#random_page_cost = 4.0 # same scale as above\n#cpu_tuple_cost = 0.01 # same scale as above#cpu_index_tuple_cost = 0.005 # same scale as above#cpu_operator_cost = 0.0025 # same scale as above\n#effective_cache_size = 6000MBThe output of select * from pg_statistics is large...should I attach it as a separate file (not sure if that's allowed on these mailing lists)\nThe data is ~2.5gb, I can't think of any place I can upload it.  I can provide the columns and data type.  it's a subset of public data from usaspending.gov\ncolumn_name,                                       datatype,  ordinal position, nullable?\n\n\nidx\ninteger\n1\nYES\n\n\nobligatedamount\ndouble precision\n2\nYES\n\n\nbaseandexercisedoptionsvalue\ndouble precision\n3\nYES\n\n\nbaseandalloptionsvalue\ndouble precision\n4\nYES\n\n\nmaj_fund_agency_cat\ncharacter varying\n5\nYES\n\n\ncontractingofficeagencyid\ncharacter varying\n6\nYES\n\n\ncontractingofficeid\ncharacter varying\n7\nYES\n\n\nfundingrequestingagencyid\ncharacter varying\n8\nYES\n\n\nfundingrequestingofficeid\ncharacter varying\n9\nYES\n\n\nsigneddate\ndate\n10\nYES\n\n\neffectivedate\ndate\n11\nYES\n\n\ncurrentcompletiondate\ndate\n12\nYES\n\n\nultimatecompletiondate\ndate\n13\nYES\n\n\nlastdatetoorder\ncharacter varying\n14\nYES\n\n\ntypeofcontractpricing\ncharacter varying\n15\nYES\n\n\nmultiyearcontract\ncharacter varying\n16\nYES\n\n\nvendorname\ncharacter varying\n17\nYES\n\n\ndunsnumber\ncharacter varying\n18\nYES\n\n\nparentdunsnumber\ncharacter varying\n19\nYES\n\n\npsc_cat\ncharacter varying\n20\nYES\n\n\nproductorservicecode\ncharacter varying\n21\nYES\n\n\nprincipalnaicscode\ncharacter varying\n22\nYES\n\n\npiid\ncharacter varying\n23\nYES\n\n\nmodnumber\ncharacter varying\n24\nYES\n\n\nfiscal_year\ncharacter varying\n25\nYES\n\n\nidvpiid\ncharacter varying\n26\nYES\n\n\nextentcompeted\ncharacter varying\n27\nYES\n\n\nnumberofoffersreceived\ndouble precision\n28\nYES\n\n\ncompetitiveprocedures\ncharacter varying\n29\nYES\n\n\nsolicitationprocedures\ncharacter varying\n30\nYES\n\n\nevaluatedpreference\ncharacter varying\n31\nYES\n\n\nfirm8aflag\ncharacter varying\n32\nYES\n\n\nsdbflag\ncharacter varying\n33\nYES\n\n\nissbacertifiedsmalldisadvantagedbusiness\ncharacter varying\n34\nYES\n\n\nwomenownedflag\ncharacter varying\n35\nYES\n\n\nveteranownedflag\ncharacter varying\n36\nYES\n\n\nminorityownedbusinessflag\ncharacter varying\n37\nYES\n\n\ndata_source\ntext\n38\nYES\n\n\npsc_cd\n \ncharacter varying\n39\nYES\nOn Mon, Aug 26, 2013 at 9:40 AM, Tomas Vondra <[email protected]> wrote:\nOn 26 Srpen 
2013, 15:02, Adam Ma'ruf wrote:\n> Sure\n>\n> I just upgraded to 9.2.4.  The query is:\n> SELECT        quebec_four\n>             , sierra\n>             , SUM(dollaramount) as dollaramount\n>   FROM alpha_quebec_echo\n>   GROUP BY   quebec_four\n>              , sierra\n>\n> alpha_quebec_echo has 5,409,743 rows and 39 columns.  Quebec_four and\n> sierra are both varchar, dollar amount is a floating point field.  It has\n> no indexes (but neither did the mssql express table).  Any other details\n> you need?\n>\n> Thanks,\n> A\n\nHi,\n\nIt's quite clear why the query is so slow - the plan is using on-disk sort\nwith ~5M rows, and that's consuming a lot of time (almost 120 seconds).\n\nI'm wondering why it chose the sort in the first place. I'd guess it'll\nchoose hash aggregate, which does not require sorted input.\n\nCan you try running \"set enable_sort = false\" and then explain of the query?\n\nIf that does not change the plan to \"HashAggregate\" instead of\n\"GroupAggregate\", please check and post values of enable_* and cost_*\nvariables.\n\nAnother question is why it's doing the sort on disk and not in memory. The\nexplain you've posted shows it requires ~430MB on disk, and in my\nexperience it usually requires ~3x that much to do the in-memory sort.\n\nI see you've set work_mem=4GB, is that correct? Can you try with a lower\nvalue - say, 1 or 2GB? I'm not sure how this works on Windows, though.\nMaybe there's some other limit (and SQL Server is not hitting it, because\nit's native Windows application).\n\nCan you prepare a testcase (table structure + data) and post it somewhere?\nOr at least the structure, if it's not possible to share the data.\n\nAlso, output from \"select * from pg_settings\" would be helpful.\n\nTomas", "msg_date": "Tue, 27 Aug 2013 00:06:39 -0400", "msg_from": "\"Adam Ma'ruf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor performance on simple queries compared to sql server express" }, { "msg_contents": "Hi,\n\nOn 27.8.2013 06:06, Adam Ma'ruf wrote:\n> Hi\n> \n> Thanks for the response. I reran the query but first ran the statement\n> you provided and set working mem to 2gb. It ended up taking 133s and\n> group aggregate was still used\n\nOK.\n\n> \n> Here are the values you asked for:\n> # - Planner Method Configuration -\n> # - Planner Cost Constants -\n\nAll set to default, so seems fine to me.\n\n> \n> #seq_page_cost = 1.0# measured on an arbitrary scale\n> #random_page_cost = 4.0# same scale as above\n> #cpu_tuple_cost = 0.01# same scale as above\n> #cpu_index_tuple_cost = 0.005# same scale as above\n> #cpu_operator_cost = 0.0025# same scale as above\n> #effective_cache_size = 6000MB\n\nWell, if effective_cache_size is commented out, then it's still 128MB\n(default). But I don't think that matters here.\n\n> The output of select * from pg_statistics is large...should I attach it\n> as a separate file (not sure if that's allowed on these mailing lists)\n\nI haven't asked for pg_statistics dump. I asked for pg_settings (but I\nalready got most of the important pieces above).\n\n\n> The data is ~2.5gb, I can't think of any place I can upload it. I can\n\nThere's like a zillion of such places. E.g. Dropbox, Box, Wuala, Google\nDrive, mega.co.nz or one of the many other alternatives. All of them\ngive you ~5GB space for free.\n\nOr I could give you access to my FTP server, if that's what you prefer.\n\n> provide the columns and data type. 
it's a subset of public data from\n> usaspending.gov <http://usaspending.gov>\n\nIs there a simple way to download / filter the public data to get the\nsame dataset as you have?\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Aug 2013 23:37:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance on simple queries compared to sql server express" } ]
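A note on reproducing the HashAggregate-versus-sorted-aggregate behaviour discussed in the thread above: the sketch below builds a synthetic stand-in table (the generated data, row count and work_mem values are assumptions for illustration only; the real alpha_quebec_echo distribution is not reproduced here) and shows how to check which aggregate strategy the planner picks and whether the sort spills to disk.

    -- synthetic stand-in; column names follow the thread, data is made up
    CREATE TABLE agg_test AS
    SELECT (s % 100000)::varchar AS quebec_four,
           (s % 50)::varchar     AS sierra,
           random() * 1000       AS dollaramount
    FROM generate_series(1, 5000000) AS s;

    ANALYZE agg_test;

    -- with a small work_mem the planner may choose GroupAggregate plus an
    -- external merge sort, as in the plan posted in the thread
    SET work_mem = '4MB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT quebec_four, sierra, SUM(dollaramount) AS dollaramount
    FROM agg_test
    GROUP BY quebec_four, sierra;

    -- with enough work_mem for the estimated number of groups the planner
    -- can switch to HashAggregate, avoiding the on-disk sort entirely
    SET work_mem = '256MB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT quebec_four, sierra, SUM(dollaramount) AS dollaramount
    FROM agg_test
    GROUP BY quebec_four, sierra;

    RESET work_mem;

If the second plan still shows GroupAggregate, comparing the planner's estimated group count against SELECT COUNT(*) FROM (SELECT DISTINCT quebec_four, sierra FROM agg_test) g is a reasonable next step, since the hash-versus-sort choice hinges on that estimate fitting in work_mem.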
[ { "msg_contents": "Hello,\n\nStable and immutable functions do not improve performance when used within the GROUP BY clause.\nHere, the function will be called for each row.\n\nTo avoid it, I can replace the funtion by its arguments within GROUP BY.\n\nMaybe this hint is worth a note within the documentation on Function Volatility.\n\nI have the situation where queries are generating by the application and it would be a pain to extend the \"query builder\"\nin order to avoid this performance issue.\nSo I wonder if it would be possible for the query planner to recognize such cases and optimize the query internally ?\n\nbest regards,\nMarc Mamin\n\n\nhere an example to highlight possible performance loss:\n\ncreate temp table ref ( i int, r int);\ncreate temp table val ( i int, v int);\n\ninsert into ref select s,s%2 from generate_series(1,10000)s;\ninsert into val select s,s%2 from generate_series(1,10000)s;\n\ncreate or replace function getv(int) returns int as \n$$ select v+1 from val where i=$1; $$ language SQL stable;\n\nexplain analyze select getv(r) from ref group by r;\nTotal runtime: 5.928 ms\n\nexplain analyze select getv(r) from ref group by getv(r);\nTotal runtime: 3980.012 ms\n\n-- and more reasonably with an index:\n\ncreate unique index val_ux on val(i);\n\nexplain analyze select getv(r) from ref group by r;\nTotal runtime: 4.278 ms\n\nexplain analyze select getv(r) from ref group by getv(r);\nTotal runtime: 68.758 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Aug 2013 11:03:55 +0000", "msg_from": "Marc Mamin <[email protected]>", "msg_from_op": true, "msg_subject": "stable and immutable functions in GROUP BY clauses." }, { "msg_contents": "\n> \n> Hello,\n> \n> Stable and immutable functions do not improve performance when used within the GROUP BY clause.\n> Here, the function will be called for each row.\n> \n> To avoid it, I can replace the funtion by its arguments within GROUP BY.\n\nShame on me !\nThis is of course bullsh... It has nothing to do with immutability and can only applies to few cases\n\ne.g: it's fine for select x+1 ... group by x,\nbut not for select x^2 ... group by x\n\nMarc Mamin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 16 Sep 2013 19:22:39 +0000", "msg_from": "Marc Mamin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: stable and immutable functions in GROUP BY clauses." } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHello\n\nWe have a SQL statement that with 9.1 takes ca 4000ms to finnish and\nwith 9.2 over 22000ms.\n\nThe explain analyze information is here:\n\nWith 9.1.:\nhttp://explain.depesz.com/s/5ou\n\nWith 9.2\nhttp://explain.depesz.com/s/d4vU\n\nThe SQL statement is:\n\nSELECT firstname || ' ' || lastname AS Name\nFROM Person R\nWHERE R.gender like 'F'\nAND 19 < (SELECT COUNT(DISTINCT filmId)\n FROM FilmParticipation F\n WHERE F.partType = 'director' AND\n F.personId = R.personId )\n AND NOT EXISTS (\n SELECT *\n FROM FilmParticipation D\n WHERE D.partType = 'director'\n AND D.personId = R.personId\n AND NOT EXISTS (\n SELECT *\n FROM FilmParticipation C\n WHERE C.partType = 'cast'\n AND C.filmId = D.filmId\n AND C.personId = D.personId\n )\n )\n;\n\n\nThe tables information:\n\n# SELECT count(*) from filmparticipation;\n count\n- ----------\n 10835351\n(1 row)\n\n# SELECT pg_size_pretty(pg_table_size('filmparticipation'));\n pg_size_pretty\n- ----------------\n 540 MB\n(1 row)\n\n# SELECT count(*) from person;\n count\n- ---------\n 1709384\n(1 row)\n\n# SELECT pg_size_pretty(pg_table_size('person'));\n pg_size_pretty\n- ----------------\n 85 MB\n(1 row)\n\n\nWe can see that the query plan is very different between versions and\nthat 9.2 is really wrong with the number of rows involved. Why is 9.2\ntaking so wrong about the number of rows involved in some parts of the\nplan?\n\nSome additional information:\n\n* VACUUM ANALYZE has been run in both databases.\n* Both databases are running on servers running RHEL6.3.\n* The relevant parameters changed from the default configuration are:\n\n9.1:\n- ----\n\n checkpoint_segments | 128\n client_encoding | UTF8\n effective_cache_size | 28892MB\n maintenance_work_mem | 256MB\n max_connections | 400\n max_stack_depth | 4MB\n random_page_cost | 2\n server_encoding | UTF8\n shared_buffers | 8026MB\n ssl | on\n ssl_renegotiation_limit | 0\n wal_buffers | 16MB\n wal_level | archive\n wal_sync_method | fdatasync\n work_mem | 16MB\n\n\n9.2:\n- ----\n\n checkpoint_segments | 128\n client_encoding | UTF8\n effective_cache_size | 28892MB\n maintenance_work_mem | 256MB\n max_connections | 400\n max_stack_depth | 4MB\n random_page_cost | 2\n server_encoding | UTF8\n shared_buffers | 8026MB\n ssl | on\n ssl_renegotiation_limit | 0\n wal_buffers | 16MB\n wal_level | archive\n wal_sync_method | fdatasync\n work_mem | 16MB\n\n\nAny ideas on why this is happening and how to fix it?\n\nThanks in advance for your time.\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAlIbSyoACgkQBhuKQurGihTOYwCfWC/ptAuMQ1pxFcplq9bHfBi3\nuekAnj+nll/Z2Lr8kFgPAB6Fx0Kop4/0\n=3TPA\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Aug 2013 14:33:46 +0200", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "SQL statement over 500% slower with 9.2 compared with 9.1" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 08/26/2013 02:33 PM, Rafael Martinez wrote:\n[............]\n> The SQL statement is:\n> \n> SELECT firstname || ' ' || lastname AS Name FROM Person R WHERE\n> R.gender like 'F' AND 19 < (SELECT 
COUNT(DISTINCT filmId) FROM\n> FilmParticipation F WHERE F.partType = 'director' AND F.personId =\n> R.personId ) AND NOT EXISTS ( SELECT * FROM\n> FilmParticipation D WHERE D.partType = 'director' AND D.personId\n> = R.personId AND NOT EXISTS ( SELECT * FROM FilmParticipation\n> C WHERE C.partType = 'cast' AND C.filmId = D.filmId AND\n> C.personId = D.personId ) ) ;\n> \n> \n[.............]\n> \n> We can see that the query plan is very different between versions\n> and that 9.2 is really wrong with the number of rows involved. Why\n> is 9.2 taking so wrong about the number of rows involved in some\n> parts of the plan?\n> \n\nHei\n\nMore information:\n\nIf we turn off enable_indexscan the runtime gets more similar to the\none we get with 9.1, we are down to 4200ms.\n\nThe query plan with this configuration is here:\nhttp://explain.depesz.com/s/jVR\n\nThe question remains the same, why is 9.2 using such a different and\nbad plan compared to 9.1, when the data and the configuration are the\nsame?\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAlIcbx8ACgkQBhuKQurGihReJgCcCiEfGQ0rZHcazlN3Ihb2PeCn\njOsAnjkh1M0j4r1DQJ4Xb1djZ+y4mji3\n=Td8b\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Aug 2013 11:19:27 +0200", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL statement over 500% slower with 9.2 compared with\n 9.1" }, { "msg_contents": "On 27.8.2013 11:19, Rafael Martinez wrote:\n> On 08/26/2013 02:33 PM, Rafael Martinez wrote:\n> [............]\n>> The SQL statement is:\n> \n>> SELECT firstname || ' ' || lastname AS Name FROM Person R WHERE\n>> R.gender like 'F' AND 19 < (SELECT COUNT(DISTINCT filmId) FROM\n>> FilmParticipation F WHERE F.partType = 'director' AND F.personId =\n>> R.personId ) AND NOT EXISTS ( SELECT * FROM\n>> FilmParticipation D WHERE D.partType = 'director' AND D.personId\n>> = R.personId AND NOT EXISTS ( SELECT * FROM FilmParticipation\n>> C WHERE C.partType = 'cast' AND C.filmId = D.filmId AND\n>> C.personId = D.personId ) ) ;\n> \n> \n> [.............]\n> \n>> We can see that the query plan is very different between versions\n>> and that 9.2 is really wrong with the number of rows involved. Why\n>> is 9.2 taking so wrong about the number of rows involved in some\n>> parts of the plan?\n> \n> \n> Hei\n> \n> More information:\n> \n> If we turn off enable_indexscan the runtime gets more similar to the\n> one we get with 9.1, we are down to 4200ms.\n> \n> The query plan with this configuration is here:\n> http://explain.depesz.com/s/jVR\n> \n> The question remains the same, why is 9.2 using such a different and\n> bad plan compared to 9.1, when the data and the configuration are the\n> same?\n\nHi,\n\nseems the problem is mostly about the inner-most query, i.e. this:\n\n SELECT *\n FROM FilmParticipation C\n WHERE C.partType = 'cast'\n AND C.filmId = D.filmId\n AND C.personId = D.personId\n )\n\nIn 9.2 it's estimated to return 1 row, but it returns 595612 of them (or\n97780 after materialization). 
I believe this is the culprit that causes\ncost estimates that are way off, and that in turn leads to choice of\n\"cheaper\" plan that actually takes much longer to evaluate.\n\nBecause the slow plan is estimated to \"cost\" 122367017.97 while the fast\none 335084834.95 (i.e. 3x more).\n\nI don't immediately see where's the problem - maybe some other hacker on\nthis list can. Can you prepare a testcase for this? I.e. a structure of\nthe tables + data so that we can reproduce it?\n\nregards\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Aug 2013 23:27:25 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL statement over 500% slower with 9.2 compared with\n 9.1" }, { "msg_contents": "On Monday, August 26, 2013, Rafael Martinez wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Hello\n>\n> We have a SQL statement that with 9.1 takes ca 4000ms to finnish and\n> with 9.2 over 22000ms.\n>\n> The explain analyze information is here:\n>\n\nCould you do explain (analyze, buffers) of these?\n\n\n>\n> With 9.1.:\n> http://explain.depesz.com/s/5ou\n>\n> With 9.2\n> http://explain.depesz.com/s/d4vU\n>\n> The SQL statement is:\n>\n> SELECT firstname || ' ' || lastname AS Name\n> FROM Person R\n> WHERE R.gender like 'F'\n> AND 19 < (SELECT COUNT(DISTINCT filmId)\n> FROM FilmParticipation F\n> WHERE F.partType = 'director' AND\n> F.personId = R.personId )\n>\n\nWhat happens if you excise the \"19 < (select ...)\" clause?\n\nThat would greatly simplify the analysis, assuming the problem remains.\n\nHow many distinct filmId are there?\n\n\n\n>\n> We can see that the query plan is very different between versions and\n> that 9.2 is really wrong with the number of rows involved. Why is 9.2\n> taking so wrong about the number of rows involved in some parts of the\n> plan?\n>\n\nMost directors are not also actors, so there is a strong negative\ncorrelation that PostgreSQL is not aware of. However, I think if you could\nget 9.1 to report the same path, it would be just as wrong on that\nestimate. But since it doesn't report the same path, you don't see how\nwrong it is.\n\nTry running:\n\nexplain (analyze, buffers)\n SELECT D.personId\n FROM FilmParticipation D\n WHERE D.partType = 'director'\n --AND D.personId = R.personId\n AND NOT EXISTS (\n SELECT *\n FROM FilmParticipation C\n WHERE C.partType = 'cast'\n AND C.filmId = D.filmId\n AND C.personId = D.personId\n );\n\nOn both 9.1 and 9.2.\n\nCheers,\n\nJeff\n\nOn Monday, August 26, 2013, Rafael Martinez wrote:-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHello\n\nWe have a SQL statement that with 9.1 takes ca 4000ms to finnish and\nwith 9.2 over 22000ms.\n\nThe explain analyze information is here:Could you do explain (analyze, buffers) of these?   
\n\nWith 9.1.:\nhttp://explain.depesz.com/s/5ou\n\nWith 9.2\nhttp://explain.depesz.com/s/d4vU\n\nThe SQL statement is:\n\nSELECT  firstname || ' ' || lastname AS Name\nFROM    Person R\nWHERE  R.gender like 'F'\nAND  19 < (SELECT COUNT(DISTINCT filmId)\n              FROM   FilmParticipation F\n              WHERE  F.partType = 'director' AND\n                     F.personId = R.personId    )What happens if you excise the \"19 < (select ...)\" clause?That would greatly simplify the analysis, assuming the problem remains.\nHow many distinct filmId are there?\n\nWe can see that the query plan is very different between versions and\nthat 9.2 is really wrong with the number of rows involved. Why is 9.2\ntaking so wrong about the number of rows involved in some parts of the\nplan?Most directors are not also actors, so there is a strong negative correlation that PostgreSQL is not aware of. However, I think if you could get 9.1 to report the same path, it would be just as wrong on that estimate.  But since it doesn't report the same path, you don't see how wrong it is.\nTry running:explain (analyze, buffers) SELECT  D.personId                FROM    FilmParticipation D                WHERE   D.partType = 'director'\n                        --AND D.personId = R.personId                        AND NOT EXISTS (                                SELECT  *                                FROM    FilmParticipation C\n                                WHERE   C.partType = 'cast'                                        AND C.filmId = D.filmId                                        AND C.personId = D.personId\n                                       );On both 9.1 and 9.2.Cheers,Jeff", "msg_date": "Tue, 27 Aug 2013 21:10:11 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL statement over 500% slower with 9.2 compared with 9.1" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 08/27/2013 11:27 PM, Tomas Vondra wrote:\n[............]\n\n> I don't immediately see where's the problem - maybe some other\n> hacker on this list can. Can you prepare a testcase for this? I.e.\n> a structure of the tables + data so that we can reproduce it?\n> \n\nHello\n\nOf course, you can download a SQL dump of the tables involved, here:\nhttp://folk.uio.no/rafael/filmdatabase_testcase.sql.gz\n\nThis file is 357M gunzipped and 101M gzipped. 
When restored in a\ndatabase it will use 1473MB.\n\n# \\d+\n List of relations\n Schema | Name | Type | Owner | Size | Description\n- --------+-------------------+-------+----------+--------+-------------\n public | filmitem | table | postgres | 41 MB |\n public | filmparticipation | table | postgres | 540 MB |\n public | person | table | postgres | 85 MB |\n(3 rows)\n\n[dbpg-hotel-utv:5432/postgres@fdb_testcase][]# \\di+\n List of relations\n Schema | Name | Type | Owner |\nTable | Size | Description\n-\n--------+--------------------------------+-------+----------+-------------------+--------+-------------\n public | filmitempkey | index | postgres | filmitem\n | 26 MB |\n public | filmparticipationfilmidindex | index | postgres |\nfilmparticipation | 232 MB |\n public | filmparticipationpersonidindex | index | postgres |\nfilmparticipation | 232 MB |\n public | filmparticipationpkey | index | postgres |\nfilmparticipation | 232 MB |\n public | personlastnameindex | index | postgres | person\n | 41 MB |\n public | personpkey | index | postgres | person\n | 37 MB |\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAlIdvbkACgkQBhuKQurGihTZ0ACgk5ZpAvBFdhJs7A3xm3h80VhR\nAX4AoIp+tSeeQtmmQh7ShP5WFI3hS+gp\n=wK/M\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Aug 2013 11:07:05 +0200", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL statement over 500% slower with 9.2 compared with\n 9.1" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 08/28/2013 06:10 AM, Jeff Janes wrote:\n> On Monday, August 26, 2013, Rafael Martinez wrote:\n\nHei\n\n> \n> Could you do explain (analyze, buffers) of these?\n> \n\nWith 9.1:\nhttp://explain.depesz.com/s/FMe\n\nwith 9.2:\nhttp://explain.depesz.com/s/Z1j\n\n\n> \n> What happens if you excise the \"19 < (select ...)\" clause? That \n> would greatly simplify the analysis, assuming the problem remains.\n> \n\nWith 9.1:\nhttp://explain.depesz.com/s/DhuV\n\nWith 9.2:\nI do not get a result in a reasonable time, after several minuttes I\ncancel the query.\n\n\n> How many distinct filmId are there?\n> \n\n count\n- --------\n 934752\n\n\n> \n> Most directors are not also actors, so there is a strong negative \n> correlation that PostgreSQL is not aware of. However, I think if \n> you could get 9.1 to report the same path, it would be just as \n> wrong on that estimate. 
But since it doesn't report the same\n> path, you don't see how wrong it is.\n> \n> Try running:\n> \n> explain (analyze, buffers) SELECT D.personId FROM \n> FilmParticipation D WHERE D.partType = 'director' --AND \n> D.personId = R.personId AND NOT EXISTS ( SELECT * FROM \n> FilmParticipation C WHERE C.partType = 'cast' AND C.filmId = \n> D.filmId AND C.personId = D.personId );\n> \n> On both 9.1 and 9.2.\n> \n\nSame result with both:\n\nwith 9.1:\nhttp://explain.depesz.com/s/fdO\n\nWith 9.2\nhttp://explain.depesz.com/s/gHz\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAlIdzb4ACgkQBhuKQurGihSGEgCeP6frW7l65IphXFUjw80VMZun\nqO0An1++ZB7IGQ0MwR4wphWmlcYGXFDD\n=9fg4\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Aug 2013 12:15:26 +0200", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL statement over 500% slower with 9.2 compared with\n 9.1" }, { "msg_contents": "Rafael Martinez <[email protected]> writes:\n> We have a SQL statement that with 9.1 takes ca 4000ms to finnish and\n> with 9.2 over 22000ms.\n> ...\n> We can see that the query plan is very different between versions and\n> that 9.2 is really wrong with the number of rows involved. Why is 9.2\n> taking so wrong about the number of rows involved in some parts of the\n> plan?\n\n9.1's no better. The reason you don't get a similar plan out of 9.1\nis that it doesn't flatten the nested EXISTS sub-selects, so that a\nparameterized nestloop plan is the best it can do no matter what.\n9.2 is able to consider more types of plan for this query, and it's\nfinding one that it thinks is cheaper. Unfortunately, parameterized\nnestloop really is the best thing in this specific case.\n\nI think the rowcount estimation error that's actually serious is the one\nfor the \"19 < (Subplan 1)\" condition, where it's expecting 161252 rows but\nreality is only 179. If that were even just one order of magnitude closer\nto reality, the other plan style would look cheaper.\n\nUnfortunately, I can't offhand think of anything you can do to improve\nthe estimation of that condition as-is. Maybe there's some other way to\nphrase the query? 
The current coding of the query looks rather like it's\nbeen tuned for the one case that pre-9.2 releases can manage to do well.\n\nIf you don't want to do any major rewriting, you could probably stick an\nOFFSET 0 into the outer EXISTS sub-select (and/or the inner one) to get\nsomething similar to the 9.1 plan.\n\nFor some context see commit 0816fad6eebddb8f1f0e21635e46625815d690b9 and\nthe previous commits it references.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Aug 2013 15:08:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL statement over 500% slower with 9.2 compared with 9.1" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 08/28/2013 09:08 PM, Tom Lane wrote:\n[..........]\n\n> \n> If you don't want to do any major rewriting, you could probably\n> stick an OFFSET 0 into the outer EXISTS sub-select (and/or the\n> inner one) to get something similar to the 9.1 plan.\n> \n\nThank you for your help.\n\nUsing OFFSET 0 in\n\nSELECT *\nFROM FilmParticipation C\nWHERE C.partType = 'cast'\nAND C.filmId = D.filmId\nAND C.personId = D.personId\nOFFSET 0\n\ngive us a result similar to 9.1.\n\nThis SQL is used as an example in one of the database courses at the\nUniversity. I will send them this information and they can decide if\nthey want to rewrite the statement or use the OFFSET trick.\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAlIm1/8ACgkQBhuKQurGihRAogCePl6G51w8dfYMruj+qSm4Vsjl\ncoMAn2sjyv6PcfsKhASC7ct0WI4YKRtJ\n=FdeD\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 04 Sep 2013 08:49:35 +0200", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL statement over 500% slower with 9.2 compared with\n 9.1" } ]
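For reference, the OFFSET 0 workaround mentioned at the end of the thread above amounts to adding the clause to the correlated sub-select so that 9.2 cannot flatten it; the statement as reportedly rewritten to get back a 9.1-style plan looks roughly like this (OFFSET 0 changes no results, it only acts as an optimization fence):

    SELECT firstname || ' ' || lastname AS Name
    FROM Person R
    WHERE R.gender LIKE 'F'
    AND 19 < (SELECT COUNT(DISTINCT filmId)
              FROM FilmParticipation F
              WHERE F.partType = 'director'
                AND F.personId = R.personId)
    AND NOT EXISTS (
            SELECT *
            FROM FilmParticipation D
            WHERE D.partType = 'director'
              AND D.personId = R.personId
              AND NOT EXISTS (
                    SELECT *
                    FROM FilmParticipation C
                    WHERE C.partType = 'cast'
                      AND C.filmId = D.filmId
                      AND C.personId = D.personId
                    OFFSET 0   -- keeps this sub-select from being pulled up
              )
    );

Whether to fence the inner or the outer EXISTS (or both) was left open in the thread; the inner placement is the one reported to restore 9.1-like runtime.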
[ { "msg_contents": "Hello.\n\nExist 2 identical server DELL PowerEdge™ R720, CPU Dual Intel® Xeon®\nE5-2620 Hexa-Core inkl, RAM 256Gb, RAID-10 8 x 600 GB SAS 6 Gb/s 15000 rpm.\n\n$ cat /etc/fedora-release\nFedora release 19\n\n$ postgres --version\npostgres (PostgreSQL) 9.2.4\n\nData ~220Gb and Indexes ~140Gb\n\niowait ~0.2-0.5. Disk usage only write ~0-2 Mb/sec\n\nOn each installed pg_bouncer. Pool size 24.\n\nOn Master in peak load ~1200 request/sec, ~30 ms/request avg, 24 CPU ~95% -\nthis is no problem\n$ perf top\n 21,14% [kernel] [k] isolate_freepages_block\n 12,27% [unknown] [.] 0x00007fc1bb303be0\n 5,93% postgres [.] hash_search_with_hash_value\n 4,85% libbz2.so.1.0.6 [.] 0x000000000000a6e0\n 2,70% postgres [.] PinBuffer\n 2,34% postgres [.] slot_deform_tuple\n 1,92% libbz2.so.1.0.6 [.] BZ2_compressBlock\n 1,85% postgres [.] LWLockAcquire\n 1,69% postgres [.] heap_page_prune_opt\n 1,48% postgres [.] _bt_checkkeys\n 1,40% [kernel] [k] page_fault\n 1,36% postgres [.] _bt_compare\n 1,23% postgres [.] heap_hot_search_buffer\n 1,19% [kernel] [k] get_pageblock_flags_group\n 1,18% postgres [.] AllocSetAlloc\n\nOn Slave max ~400-500 request/sec, ~200 and up 24 ms/request avg, 24 CPU\n~95% - this is problem\n$ perf top\n 30,10% postgres [.] s_lock\n 22,90% [unknown] [.] 0x0000000000729cfe\n 4,98% postgres [.] RecoveryInProgress.part.9\n 4,89% postgres [.] LWLockAcquire\n 4,57% postgres [.] hash_search_with_hash_value\n 3,50% postgres [.] PinBuffer\n 2,31% postgres [.] heap_page_prune_opt\n 2,27% postgres [.] LWLockRelease\n 1,18% postgres [.] heap_hot_search_buffer\n 1,03% postgres [.] AllocSetAlloc\n...\n\nSlave at a much lower load than the master hangs on the function s_lock.\nWhat can be done about it?\n\nOn Slave ~300 request/sec, ~5-8 ms/request avg, cpu usage ~20% - normal\nwork by small load\n$ perf top\n 10,74% postgres [.] hash_search_with_hash_value\n 4,94% postgres [.] PinBuffer\n 4,61% postgres [.] AllocSetAlloc\n 3,57% postgres [.] heap_page_prune_opt\n 3,24% postgres [.] LWLockAcquire\n 2,47% postgres [.] heap_hot_search_buffer\n 2,11% postgres [.] SearchCatCache\n 1,90% postgres [.] LWLockRelease\n 1,87% postgres [.] _bt_compare\n 1,68% postgres [.] FunctionCall2Coll\n 1,46% postgres [.] _bt_checkkeys\n 1,24% postgres [.] copyObject\n 1,15% postgres [.] RecoveryInProgress.part.9\n 1,05% postgres [.] 
slot_deform_tuple\n...\n\n\nConfiguration Master postgres.conf\nlisten_addresses = '*'\nmax_connections = 100\nshared_buffers = 200GB\nwork_mem = 20MB\nmaintenance_work_mem = 2GB\neffective_io_concurrency = 4\nwal_level = hot_standby\nfsync = on\nsynchronous_commit = off\nfull_page_writes = on\nwal_writer_delay = 200ms\ncheckpoint_segments = 100\ncheckpoint_timeout = 15min\ncheckpoint_completion_target = 0.9\narchive_mode = on\narchive_command = 'pbzip2 -f -c %p > /opt/pg/wals/wals/%f.bz2'\nmax_wal_senders = 3\nrandom_page_cost = 0.5\ncpu_tuple_cost = 0.02\ncpu_index_tuple_cost = 0.01\ncpu_operator_cost = 0.005\neffective_cache_size = 40GB\ndefault_statistics_target = 300\nlogging_collector = on\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\nlog_truncate_on_rotation = on\nlog_rotation_age = 1d\nlog_rotation_size = 0\nlog_min_duration_statement = 1000\nlog_checkpoints = on\nlog_line_prefix = '%t %p %c-%l %x %q(%u, %d, %r, %a) '\nlog_lock_waits = on\ntrack_io_timing = on\ntrack_activity_query_size = 4096\nautovacuum = on\nlog_autovacuum_min_duration = 0\nautovacuum_freeze_max_age = 1500000000\ndatestyle = 'iso, dmy'\ntimezone = 'Europe/Moscow'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'ru_RU.UTF-8'\nlc_numeric = 'ru_RU.UTF-8'\nlc_time = 'ru_RU.UTF-8'\ndefault_text_search_config = 'pg_catalog.russian'\nshared_preload_libraries = 'pg_stat_statements'\npg_stat_statements.max = 1000\npg_stat_statements.track = all\nmax_locks_per_transaction = 264\n\nConfiguration Slave postgres.conf\nlisten_addresses = '*'\nmax_connections = 100\nshared_buffers = 200GB\nwork_mem = 20MB\nmaintenance_work_mem = 2GB\neffective_io_concurrency = 4\nwal_level = hot_standby\nfsync = on\nsynchronous_commit = off\nfull_page_writes = on\nwal_writer_delay = 200ms\ncommit_delay = 1000\ncommit_siblings = 2\ncheckpoint_segments = 100\ncheckpoint_timeout = 15min\ncheckpoint_completion_target = 0.9\narchive_mode = on\narchive_command = 'pbzip2 -f -c %p > /opt/pg/wals/wals/%f.bz2'\nmax_wal_senders = 4\nhot_standby = on\nmax_standby_archive_delay = 30s\nmax_standby_streaming_delay = 30s\nhot_standby_feedback = on\nrandom_page_cost = 0.5\ncpu_tuple_cost = 0.02\ncpu_index_tuple_cost = 0.01\ncpu_operator_cost = 0.005\neffective_cache_size = 40GB\ndefault_statistics_target = 300\nlogging_collector = on\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\nlog_truncate_on_rotation = on\nlog_rotation_age = 1d\nlog_rotation_size = 0\nlog_min_duration_statement = 1000\nlog_checkpoints = on\nlog_line_prefix = '%t %p %c-%l %x %q(%u, %d, %r, %a) '\nlog_lock_waits = on\ntrack_functions = none\ntrack_io_timing = on\ntrack_activity_query_size = 4096\nautovacuum = on\ndatestyle = 'iso, dmy'\ntimezone = 'Europe/Moscow'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'ru_RU.UTF-8'\nlc_numeric = 'ru_RU.UTF-8'\nlc_time = 'ru_RU.UTF-8'\ndefault_text_search_config = 'pg_catalog.russian'\nshared_preload_libraries = 'pg_stat_statements'\npg_stat_statements.max = 1000\npg_stat_statements.track = all\nmax_locks_per_transaction = 264\n\nThanks.\n\nHello.\nExist 2 identical server DELL PowerEdge™ R720, CPU Dual Intel® Xeon® E5-2620 Hexa-Core inkl, RAM 256Gb, RAID-10 8 x 600 GB SAS 6 Gb/s 15000 rpm.\n$ cat /etc/fedora-release \nFedora release 19\n$ postgres --versionpostgres (PostgreSQL) 9.2.4\nData ~220Gb and Indexes ~140Gb\niowait ~0.2-0.5. Disk usage only write ~0-2 Mb/sec\nOn each installed pg_bouncer. 
Pool size 24.\nOn Master in peak load ~1200 request/sec, ~30 ms/request avg, 24 CPU ~95% - this is no problem\n$ perf top 21,14%  [kernel]                 [k] isolate_freepages_block\n 12,27%  [unknown]                [.] 0x00007fc1bb303be0  5,93%  postgres                 [.] hash_search_with_hash_value\n  4,85%  libbz2.so.1.0.6          [.] 0x000000000000a6e0  2,70%  postgres                 [.] PinBuffer\n  2,34%  postgres                 [.] slot_deform_tuple  1,92%  libbz2.so.1.0.6          [.] BZ2_compressBlock\n  1,85%  postgres                 [.] LWLockAcquire  1,69%  postgres                 [.] heap_page_prune_opt\n  1,48%  postgres                 [.] _bt_checkkeys  1,40%  [kernel]                 [k] page_fault\n  1,36%  postgres                 [.] _bt_compare  1,23%  postgres                 [.] heap_hot_search_buffer\n  1,19%  [kernel]                 [k] get_pageblock_flags_group\n  1,18%  postgres                 [.] AllocSetAlloc\nOn Slave max ~400-500 request/sec, ~200 and up 24 ms/request avg, 24 CPU ~95% - this is problem$ perf top\n 30,10%  postgres               [.] s_lock 22,90%  [unknown]              [.] 0x0000000000729cfe\n  4,98%  postgres               [.] RecoveryInProgress.part.9\n  4,89%  postgres               [.] LWLockAcquire  4,57%  postgres               [.] hash_search_with_hash_value\n  3,50%  postgres               [.] PinBuffer  2,31%  postgres               [.] heap_page_prune_opt\n  2,27%  postgres               [.] LWLockRelease  1,18%  postgres               [.] heap_hot_search_buffer\n  1,03%  postgres               [.] AllocSetAlloc...\nSlave at a much lower load than the master hangs on the function s_lock. What can be done about it?\nOn Slave ~300 request/sec, ~5-8 ms/request avg, cpu usage ~20% - normal work by small load\n$ perf top 10,74%  postgres               [.] hash_search_with_hash_value\n  4,94%  postgres               [.] PinBuffer  4,61%  postgres               [.] AllocSetAlloc\n  3,57%  postgres               [.] heap_page_prune_opt  3,24%  postgres               [.] LWLockAcquire\n  2,47%  postgres               [.] heap_hot_search_buffer\n  2,11%  postgres               [.] SearchCatCache  1,90%  postgres               [.] LWLockRelease\n  1,87%  postgres               [.] _bt_compare  1,68%  postgres               [.] FunctionCall2Coll\n  1,46%  postgres               [.] _bt_checkkeys  1,24%  postgres               [.] copyObject\n  1,15%  postgres               [.] RecoveryInProgress.part.9\n  1,05%  postgres               [.] 
slot_deform_tuple...\nConfiguration Master postgres.conf\nlisten_addresses = '*'max_connections = 100\nshared_buffers = 200GBwork_mem = 20MB\nmaintenance_work_mem = 2GBeffective_io_concurrency = 4\nwal_level = hot_standbyfsync = on\nsynchronous_commit = offfull_page_writes = on\nwal_writer_delay = 200mscheckpoint_segments = 100\ncheckpoint_timeout = 15mincheckpoint_completion_target = 0.9\narchive_mode = onarchive_command = 'pbzip2 -f -c %p > /opt/pg/wals/wals/%f.bz2'\nmax_wal_senders = 3random_page_cost = 0.5\ncpu_tuple_cost = 0.02cpu_index_tuple_cost = 0.01\ncpu_operator_cost = 0.005effective_cache_size = 40GB\ndefault_statistics_target = 300logging_collector = on\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'log_truncate_on_rotation = on\nlog_rotation_age = 1dlog_rotation_size = 0\nlog_min_duration_statement = 1000log_checkpoints = on\nlog_line_prefix = '%t %p %c-%l %x %q(%u, %d, %r, %a) '\nlog_lock_waits = ontrack_io_timing = on\ntrack_activity_query_size = 4096autovacuum = on\nlog_autovacuum_min_duration = 0autovacuum_freeze_max_age = 1500000000\ndatestyle = 'iso, dmy'timezone = 'Europe/Moscow'\nlc_messages = 'en_US.UTF-8'lc_monetary = 'ru_RU.UTF-8'\nlc_numeric = 'ru_RU.UTF-8'lc_time = 'ru_RU.UTF-8'\ndefault_text_search_config = 'pg_catalog.russian'shared_preload_libraries = 'pg_stat_statements'\npg_stat_statements.max = 1000pg_stat_statements.track = all\nmax_locks_per_transaction = 264\nConfiguration Slave postgres.conflisten_addresses = '*'\nmax_connections = 100shared_buffers = 200GB\nwork_mem = 20MBmaintenance_work_mem = 2GB\neffective_io_concurrency = 4wal_level = hot_standby\nfsync = onsynchronous_commit = off\nfull_page_writes = onwal_writer_delay = 200ms\ncommit_delay = 1000commit_siblings = 2\ncheckpoint_segments = 100checkpoint_timeout = 15min\ncheckpoint_completion_target = 0.9archive_mode = on\narchive_command = 'pbzip2 -f -c %p > /opt/pg/wals/wals/%f.bz2'\nmax_wal_senders = 4hot_standby = on\nmax_standby_archive_delay = 30smax_standby_streaming_delay = 30s\nhot_standby_feedback = onrandom_page_cost = 0.5\ncpu_tuple_cost = 0.02cpu_index_tuple_cost = 0.01\ncpu_operator_cost = 0.005effective_cache_size = 40GB\ndefault_statistics_target = 300logging_collector = on\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'log_truncate_on_rotation = on\nlog_rotation_age = 1dlog_rotation_size = 0\nlog_min_duration_statement = 1000log_checkpoints = on\nlog_line_prefix = '%t %p %c-%l %x %q(%u, %d, %r, %a) '\nlog_lock_waits = ontrack_functions = none\ntrack_io_timing = ontrack_activity_query_size = 4096\nautovacuum = ondatestyle = 'iso, dmy'\ntimezone = 'Europe/Moscow'lc_messages = 'en_US.UTF-8'\nlc_monetary = 'ru_RU.UTF-8'lc_numeric = 'ru_RU.UTF-8'\nlc_time = 'ru_RU.UTF-8'default_text_search_config = 'pg_catalog.russian'\nshared_preload_libraries = 'pg_stat_statements'pg_stat_statements.max = 1000\npg_stat_statements.track = allmax_locks_per_transaction = 264\nThanks.", "msg_date": "Tue, 27 Aug 2013 13:57:20 +0600", "msg_from": "=?KOI8-R?B?5M3J1NLJyiDkxcfU0dKj1w==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Cpu usage 100% on slave. s_lock problem." 
}, { "msg_contents": "On Tue, Aug 27, 2013 at 2:57 AM, Дмитрий Дегтярёв <[email protected]> wrote:\n> Hello.\n>\n> Exist 2 identical server DELL PowerEdge™ R720, CPU Dual Intel® Xeon® E5-2620\n> Hexa-Core inkl, RAM 256Gb, RAID-10 8 x 600 GB SAS 6 Gb/s 15000 rpm.\n>\n> $ cat /etc/fedora-release\n> Fedora release 19\n>\n> $ postgres --version\n> postgres (PostgreSQL) 9.2.4\n>\n> Data ~220Gb and Indexes ~140Gb\n>\n> iowait ~0.2-0.5. Disk usage only write ~0-2 Mb/sec\n>\n> On each installed pg_bouncer. Pool size 24.\n>\n> On Master in peak load ~1200 request/sec, ~30 ms/request avg, 24 CPU ~95% -\n> this is no problem\n> $ perf top\n> 21,14% [kernel] [k] isolate_freepages_block\n> 12,27% [unknown] [.] 0x00007fc1bb303be0\n> 5,93% postgres [.] hash_search_with_hash_value\n> 4,85% libbz2.so.1.0.6 [.] 0x000000000000a6e0\n> 2,70% postgres [.] PinBuffer\n> 2,34% postgres [.] slot_deform_tuple\n> 1,92% libbz2.so.1.0.6 [.] BZ2_compressBlock\n> 1,85% postgres [.] LWLockAcquire\n> 1,69% postgres [.] heap_page_prune_opt\n> 1,48% postgres [.] _bt_checkkeys\n> 1,40% [kernel] [k] page_fault\n> 1,36% postgres [.] _bt_compare\n> 1,23% postgres [.] heap_hot_search_buffer\n> 1,19% [kernel] [k] get_pageblock_flags_group\n> 1,18% postgres [.] AllocSetAlloc\n>\n> On Slave max ~400-500 request/sec, ~200 and up 24 ms/request avg, 24 CPU\n> ~95% - this is problem\n> $ perf top\n> 30,10% postgres [.] s_lock\n> 22,90% [unknown] [.] 0x0000000000729cfe\n> 4,98% postgres [.] RecoveryInProgress.part.9\n> 4,89% postgres [.] LWLockAcquire\n> 4,57% postgres [.] hash_search_with_hash_value\n> 3,50% postgres [.] PinBuffer\n> 2,31% postgres [.] heap_page_prune_opt\n\n\nIt looks like you're hitting spinlock connection inside\nheap_page_prune_opt(). Which is commented:\n * Note: this is called quite often. It's important that it fall out quickly\n * if there's not any use in pruning.\n\nThis in turn calls RecoveryInProgress() which spinlocks in order to\nget a guaranteed result. At that call site, we are told:\n/*\n* We can't write WAL in recovery mode, so there's no point trying to\n* clean the page. The master will likely issue a cleaning WAL record soon\n* anyway, so this is no particular loss.\n*/\n\nSo ISTM it's necessary to pedantically check RecoveryInProgress on\neach and every call of this routine (or at least, we should be able to\nreduce the number of spinlocks).\n\nHm, what if we exposed LocalRecoveryInProgress() through a function\nwhich would approximately satisfy the condition\n\"MightRecoveryInProgress()\" in the basis the condition only moves in\none direction? That could lead to optimization around the spinlock in\nhot path cases like this where getting 'TRUE' incorrectly is mostly\nharmless...\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Aug 2013 08:23:07 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On Tue, Aug 27, 2013 at 8:23 AM, Merlin Moncure <[email protected]> wrote:\n> It looks like you're hitting spinlock connection inside\n> heap_page_prune_opt(). Which is commented:\n> * Note: this is called quite often. It's important that it fall out quickly\n> * if there's not any use in pruning.\n>\n> This in turn calls RecoveryInProgress() which spinlocks in order to\n> get a guaranteed result. 
At that call site, we are told:\n> /*\n> * We can't write WAL in recovery mode, so there's no point trying to\n> * clean the page. The master will likely issue a cleaning WAL record soon\n> * anyway, so this is no particular loss.\n> */\n>\n> So ISTM it's necessary to pedantically check RecoveryInProgress on\n> each and every call of this routine (or at least, we should be able to\n> reduce the number of spinlocks).\n>\n> Hm, what if we exposed LocalRecoveryInProgress() through a function\n> which would approximately satisfy the condition\n> \"MightRecoveryInProgress()\" in the basis the condition only moves in\n> one direction? That could lead to optimization around the spinlock in\n> hot path cases like this where getting 'TRUE' incorrectly is mostly\n> harmless...\n\nMore specifically, this hypothetical routine would query\nxlogctl->SharedRecoveryInProgress without taking a lock and would not\nissue InitXLOGAccess(). RecoveryInProgress() seems to be called\neverywhere (In particular: StartTransaction()) so I don't think\nthere's a lot of risk in terms of losing access to the xlog.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Aug 2013 08:38:54 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On Tue, Aug 27, 2013 at 8:38 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Aug 27, 2013 at 8:23 AM, Merlin Moncure <[email protected]> wrote:\n>> It looks like you're hitting spinlock connection inside\n>> heap_page_prune_opt(). Which is commented:\n>> * Note: this is called quite often. It's important that it fall out quickly\n>> * if there's not any use in pruning.\n>>\n>> This in turn calls RecoveryInProgress() which spinlocks in order to\n>> get a guaranteed result. At that call site, we are told:\n>> /*\n>> * We can't write WAL in recovery mode, so there's no point trying to\n>> * clean the page. The master will likely issue a cleaning WAL record soon\n>> * anyway, so this is no particular loss.\n>> */\n>>\n>> So ISTM it's necessary to pedantically check RecoveryInProgress on\n>> each and every call of this routine (or at least, we should be able to\n>> reduce the number of spinlocks).\n>>\n>> Hm, what if we exposed LocalRecoveryInProgress() through a function\n>> which would approximately satisfy the condition\n>> \"MightRecoveryInProgress()\" in the basis the condition only moves in\n>> one direction? That could lead to optimization around the spinlock in\n>> hot path cases like this where getting 'TRUE' incorrectly is mostly\n>> harmless...\n>\n> More specifically, this hypothetical routine would query\n> xlogctl->SharedRecoveryInProgress without taking a lock and would not\n> issue InitXLOGAccess(). RecoveryInProgress() seems to be called\n> everywhere (In particular: StartTransaction()) so I don't think\n> there's a lot of risk in terms of losing access to the xlog.\n\nSomething like the attached. 
Note, this patch is for research\npurposes only and should *not* be applied to your production\nenvironment.\n\nmerlin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 27 Aug 2013 09:12:57 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On Tue, Aug 27, 2013 at 9:12 AM, Merlin Moncure <[email protected]> wrote:\n> Something like the attached. Note, this patch is for research\n> purposes only and should *not* be applied to your production\n> environment.\n\nHere is a revised version that is missing the spurious whitespace edit.\n\nmerlin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 27 Aug 2013 09:57:38 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On 2013-08-27 09:57:38 -0500, Merlin Moncure wrote:\n> + bool\n> + RecoveryMightBeInProgress(void)\n> + {\n> + \t/*\n> + \t * We check shared state each time only until we leave recovery mode. We\n> + \t * can't re-enter recovery, so there's no need to keep checking after the\n> + \t * shared variable has once been seen false.\n> + \t */\n> + \tif (!LocalRecoveryInProgress)\n> + \t\treturn false;\n> + \telse\n> + \t{\n> + \t\t/* use volatile pointer to prevent code rearrangement */\n> + \t\tvolatile XLogCtlData *xlogctl = XLogCtl;\n> + \n> + \t\t/* Intentionally query xlogctl without spinlocking! */\n> + \t\tLocalRecoveryInProgress = xlogctl->SharedRecoveryInProgress;\n> + \n> + \t\treturn LocalRecoveryInProgress;\n> + \t}\n> + }\n\nI don't think it's acceptable to *set* LocalRecoveryInProgress\nhere. That should only be done in the normal routine.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Aug 2013 17:55:56 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On Tue, Aug 27, 2013 at 10:55 AM, Andres Freund <[email protected]> wrote:\n> On 2013-08-27 09:57:38 -0500, Merlin Moncure wrote:\n>> + bool\n>> + RecoveryMightBeInProgress(void)\n>> + {\n>> + /*\n>> + * We check shared state each time only until we leave recovery mode. We\n>> + * can't re-enter recovery, so there's no need to keep checking after the\n>> + * shared variable has once been seen false.\n>> + */\n>> + if (!LocalRecoveryInProgress)\n>> + return false;\n>> + else\n>> + {\n>> + /* use volatile pointer to prevent code rearrangement */\n>> + volatile XLogCtlData *xlogctl = XLogCtl;\n>> +\n>> + /* Intentionally query xlogctl without spinlocking! */\n>> + LocalRecoveryInProgress = xlogctl->SharedRecoveryInProgress;\n>> +\n>> + return LocalRecoveryInProgress;\n>> + }\n>> + }\n>\n> I don't think it's acceptable to *set* LocalRecoveryInProgress\n> here. 
That should only be done in the normal routine.\n\nquite right -- that was a major error -- you could bypass the\ninitialization call to the xlog with some bad luck.\n\nmerlin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 27 Aug 2013 12:17:55 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "Hello.\n\nWe have not been able to reproduce this problem on a test servers. Use this\npatch to production servers do not dare.\n\nIn the course of studying the problems we have identified that many queries\nare executed on the slave several times slower. On master function\nheap_hot_search_buffer execute 100 cycles, on the slave the same query with\nthe same plan function heap_hot_search_buffer execute 2000 cycles.\nAlso, we were able to reproduce the problem on the master and detect that\nthere s_lock of slow queries.\n\nWe have solved this problem. A large number of queries used 4 frequently\nchanging index. In these indexes, 99% of the dead tuples. Autovacuum and\neven VACUUM FULL these tuples can not be removed because of\nautovacuum_freeze_max_age.\n\nWe've added cron that 2-3 times a day, performs CREATE INDEX CONCURRENTLY\nidx_name_new; DROP INDEX CONCURRENTLY idx_name; ALTER INDEX idx_name_new\nRENAME TO idx_name; for this 4 indexes.\n\nAs a result s_lock not exists in listed perf top.\n\n\n2013/8/29 Merlin Moncure <[email protected]>\n\n>\n> so -- are you in a position where you might be able to test this patch?\n>\n> merlin\n\nHello.We have not been able to reproduce this problem on a test servers. Use this patch to production servers do not dare.\nIn the course of studying the problems we have identified that many queries are executed on the slave several times slower. On master function heap_hot_search_buffer execute 100 cycles, on the slave the same query with the same plan function heap_hot_search_buffer execute 2000 cycles.\nAlso, we were able to reproduce the problem on the master and detect that there s_lock of slow queries. We have solved this problem. A large number of queries used 4 frequently changing index. In these indexes, 99% of the dead tuples. Autovacuum and even VACUUM FULL these tuples can not be removed because of autovacuum_freeze_max_age.\nWe've added cron that 2-3 times a day, performs CREATE INDEX CONCURRENTLY idx_name_new; DROP INDEX CONCURRENTLY idx_name; ALTER INDEX idx_name_new RENAME TO idx_name; for this 4 indexes.\nAs a result s_lock not exists in listed perf top.\n2013/8/29 Merlin Moncure <[email protected]>\nso -- are you in a position where you might be able to test this patch?merlin", "msg_date": "Tue, 17 Sep 2013 17:55:01 +0600", "msg_from": "=?KOI8-R?B?5M3J1NLJyiDkxcfU0dKj1w==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "Hi,\n\nOn 2013-09-17 17:55:01 +0600, Дмитрий Дегтярёв wrote:\n> We have not been able to reproduce this problem on a test servers. Use this\n> patch to production servers do not dare.\n> \n> In the course of studying the problems we have identified that many queries\n> are executed on the slave several times slower. 
On master function\n> heap_hot_search_buffer execute 100 cycles, on the slave the same query with\n> the same plan function heap_hot_search_buffer execute 2000 cycles.\n> Also, we were able to reproduce the problem on the master and detect that\n> there s_lock of slow queries.\n\nWhat you describe is normally an indication that you have too many\nlongrunning transactions around preventing hot pruning from working.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Sep 2013 13:59:26 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On Tue, Sep 17, 2013 at 6:59 AM, Andres Freund <[email protected]> wrote:\n> Hi,\n>\n> On 2013-09-17 17:55:01 +0600, Дмитрий Дегтярёв wrote:\n>> We have not been able to reproduce this problem on a test servers. Use this\n>> patch to production servers do not dare.\n>>\n>> In the course of studying the problems we have identified that many queries\n>> are executed on the slave several times slower. On master function\n>> heap_hot_search_buffer execute 100 cycles, on the slave the same query with\n>> the same plan function heap_hot_search_buffer execute 2000 cycles.\n>> Also, we were able to reproduce the problem on the master and detect that\n>> there s_lock of slow queries.\n>\n> What you describe is normally an indication that you have too many\n> longrunning transactions around preventing hot pruning from working.\n\nDo you think it's worth submitting the lock avoidance patch for formal review?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Sep 2013 08:18:54 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On 2013-09-17 08:18:54 -0500, Merlin Moncure wrote:\n> On Tue, Sep 17, 2013 at 6:59 AM, Andres Freund <[email protected]> wrote:\n> > Hi,\n> >\n> > On 2013-09-17 17:55:01 +0600, Дмитрий Дегтярёв wrote:\n> >> We have not been able to reproduce this problem on a test servers. Use this\n> >> patch to production servers do not dare.\n> >>\n> >> In the course of studying the problems we have identified that many queries\n> >> are executed on the slave several times slower. On master function\n> >> heap_hot_search_buffer execute 100 cycles, on the slave the same query with\n> >> the same plan function heap_hot_search_buffer execute 2000 cycles.\n> >> Also, we were able to reproduce the problem on the master and detect that\n> >> there s_lock of slow queries.\n> >\n> > What you describe is normally an indication that you have too many\n> > longrunning transactions around preventing hot pruning from working.\n> \n> Do you think it's worth submitting the lock avoidance patch for formal review?\n\nYou mean the bufmgr.c thing? Generally I think that that code needs a\ngood of scalability work - there's a whole thread about it\nsomewhere. 
But TBH the theories you've voiced about the issues you've\nseen haven't convinced me so far.\n\nIf you can manage to prove it has a benefit in some case that's\nreproducable - why not go ahead?\n\nQuick question: Do you happen to have pg_locks output from back then\naround? We've recently found servers going into somewhat similar\nslowdowns because they exhausted the fastpath locks which made lwlocks\nfar more expensive and made s_lock go up very high in the profle.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Sep 2013 15:24:39 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On Tue, Sep 17, 2013 at 8:24 AM, Andres Freund <[email protected]> wrote:\n> On 2013-09-17 08:18:54 -0500, Merlin Moncure wrote:\n>> Do you think it's worth submitting the lock avoidance patch for formal review?\n>\n> You mean the bufmgr.c thing? Generally I think that that code needs a\n> good of scalability work - there's a whole thread about it\n> somewhere. But TBH the theories you've voiced about the issues you've\n> seen haven't convinced me so far.\n\ner, no (but I share your skepticism -- my challenge right now is to\ndemonstrate measurable benefit which so far I've been unable to do).\nI was talking about the patch on *this* thread which bypasses the\ns_lock in RecoveryInProgress() :-).\n\n> Quick question: Do you happen to have pg_locks output from back then\n> around? We've recently found servers going into somewhat similar\n> slowdowns because they exhausted the fastpath locks which made lwlocks\n> far more expensive and made s_lock go up very high in the profle.\n\nI do. Unfortunately I don't have profile info. Not sure how useful\nit is -- I'll send it off-list.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Sep 2013 08:32:30 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On 2013-09-17 08:32:30 -0500, Merlin Moncure wrote:\n> On Tue, Sep 17, 2013 at 8:24 AM, Andres Freund <[email protected]> wrote:\n> > On 2013-09-17 08:18:54 -0500, Merlin Moncure wrote:\n> >> Do you think it's worth submitting the lock avoidance patch for formal review?\n> >\n> > You mean the bufmgr.c thing? Generally I think that that code needs a\n> > good of scalability work - there's a whole thread about it\n> > somewhere. But TBH the theories you've voiced about the issues you've\n> > seen haven't convinced me so far.\n> \n> er, no (but I share your skepticism -- my challenge right now is to\n> demonstrate measurable benefit which so far I've been unable to do).\n> I was talking about the patch on *this* thread which bypasses the\n> s_lock in RecoveryInProgress() :-).\n\nAh, yes. Sorry confused issues ;). Yes, I think that'd made sense.\n\n> > Quick question: Do you happen to have pg_locks output from back then\n> > around? 
We've recently found servers going into somewhat similar\n> > slowdowns because they exhausted the fastpath locks which made lwlocks\n> > far more expensive and made s_lock go up very high in the profle.\n> \n> I do. Unfortunately I don't have profile info. Not sure how useful\n> it is -- I'll send it off-list.\n\nGreat.\n\nThe primary thing I'd like to know is whether there are lots of\nnon-fastpath locks...\n\nIf you ever get into the situation I mistakenly referred to again, I'd\nstrongly suggest recompling postgres with -fno-omit-frame-pointer. That\nmakes hierarchical profiles actually useful which can help tremendously\nwith diagnosing issues like this...\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Sep 2013 15:35:46 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On Tue, Sep 17, 2013 at 8:35 AM, Andres Freund <[email protected]> wrote:\n> On 2013-09-17 08:32:30 -0500, Merlin Moncure wrote:\n>> On Tue, Sep 17, 2013 at 8:24 AM, Andres Freund <[email protected]> wrote:\n>> > On 2013-09-17 08:18:54 -0500, Merlin Moncure wrote:\n>> >> Do you think it's worth submitting the lock avoidance patch for formal review?\n>> >\n>> > You mean the bufmgr.c thing? Generally I think that that code needs a\n>> > good of scalability work - there's a whole thread about it\n>> > somewhere. But TBH the theories you've voiced about the issues you've\n>> > seen haven't convinced me so far.\n>>\n>> er, no (but I share your skepticism -- my challenge right now is to\n>> demonstrate measurable benefit which so far I've been unable to do).\n>> I was talking about the patch on *this* thread which bypasses the\n>> s_lock in RecoveryInProgress() :-).\n>\n> Ah, yes. Sorry confused issues ;). Yes, I think that'd made sense.\n>\n>> > Quick question: Do you happen to have pg_locks output from back then\n>> > around? We've recently found servers going into somewhat similar\n>> > slowdowns because they exhausted the fastpath locks which made lwlocks\n>> > far more expensive and made s_lock go up very high in the profle.\n>>\n>> I do. Unfortunately I don't have profile info. Not sure how useful\n>> it is -- I'll send it off-list.\n>\n> Great.\n>\n> The primary thing I'd like to know is whether there are lots of\n> non-fastpath locks...\n>\n> If you ever get into the situation I mistakenly referred to again, I'd\n> strongly suggest recompling postgres with -fno-omit-frame-pointer. That\n> makes hierarchical profiles actually useful which can help tremendously\n> with diagnosing issues like this...\n\nWe may get an opportunity to do that. I'm curious enough about the\nTHP compaction issues that Kevin mentioned to to maybe consider\ncranking buffers again. If I do that, it will be with strict\ninstructions to the site operators to catch a profile before taking\nfurther action.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Sep 2013 08:40:23 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." 
}, { "msg_contents": "On 2013-09-17 08:40:23 -0500, Merlin Moncure wrote:\n> > If you ever get into the situation I mistakenly referred to again, I'd\n> > strongly suggest recompling postgres with -fno-omit-frame-pointer. That\n> > makes hierarchical profiles actually useful which can help tremendously\n> > with diagnosing issues like this...\n> \n> We may get an opportunity to do that. I'm curious enough about the\n> THP compaction issues that Kevin mentioned to to maybe consider\n> cranking buffers again. If I do that, it will be with strict\n> instructions to the site operators to catch a profile before taking\n> further action.\n\nThe THP issues should be very clearly diagnosable because a good part of\nthe time will be spent in the kernel. Lots of spinlocking there, but the\nfunction names are easily discernible from pg's code.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 17 Sep 2013 15:43:48 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On 2013-08-27 12:17:55 -0500, Merlin Moncure wrote:\n> On Tue, Aug 27, 2013 at 10:55 AM, Andres Freund <[email protected]> wrote:\n> > On 2013-08-27 09:57:38 -0500, Merlin Moncure wrote:\n> >> + bool\n> >> + RecoveryMightBeInProgress(void)\n> >> + {\n> >> + /*\n> >> + * We check shared state each time only until we leave recovery mode. We\n> >> + * can't re-enter recovery, so there's no need to keep checking after the\n> >> + * shared variable has once been seen false.\n> >> + */\n> >> + if (!LocalRecoveryInProgress)\n> >> + return false;\n> >> + else\n> >> + {\n> >> + /* use volatile pointer to prevent code rearrangement */\n> >> + volatile XLogCtlData *xlogctl = XLogCtl;\n> >> +\n> >> + /* Intentionally query xlogctl without spinlocking! */\n> >> + LocalRecoveryInProgress = xlogctl->SharedRecoveryInProgress;\n> >> +\n> >> + return LocalRecoveryInProgress;\n> >> + }\n> >> + }\n> >\n> > I don't think it's acceptable to *set* LocalRecoveryInProgress\n> > here. That should only be done in the normal routine.\n> \n> quite right -- that was a major error -- you could bypass the\n> initialization call to the xlog with some bad luck.\n\nI've seen this in profiles since, so I'd appreciate pushing this\nforward.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 27 Sep 2013 01:08:11 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On Thu, Sep 26, 2013 at 6:08 PM, Andres Freund <[email protected]> wrote:\n> On 2013-08-27 12:17:55 -0500, Merlin Moncure wrote:\n>> On Tue, Aug 27, 2013 at 10:55 AM, Andres Freund <[email protected]> wrote:\n>> > On 2013-08-27 09:57:38 -0500, Merlin Moncure wrote:\n>> >> + bool\n>> >> + RecoveryMightBeInProgress(void)\n>> >> + {\n>> >> + /*\n>> >> + * We check shared state each time only until we leave recovery mode. 
We\n>> >> + * can't re-enter recovery, so there's no need to keep checking after the\n>> >> + * shared variable has once been seen false.\n>> >> + */\n>> >> + if (!LocalRecoveryInProgress)\n>> >> + return false;\n>> >> + else\n>> >> + {\n>> >> + /* use volatile pointer to prevent code rearrangement */\n>> >> + volatile XLogCtlData *xlogctl = XLogCtl;\n>> >> +\n>> >> + /* Intentionally query xlogctl without spinlocking! */\n>> >> + LocalRecoveryInProgress = xlogctl->SharedRecoveryInProgress;\n>> >> +\n>> >> + return LocalRecoveryInProgress;\n>> >> + }\n>> >> + }\n>> >\n>> > I don't think it's acceptable to *set* LocalRecoveryInProgress\n>> > here. That should only be done in the normal routine.\n>>\n>> quite right -- that was a major error -- you could bypass the\n>> initialization call to the xlog with some bad luck.\n>\n> I've seen this in profiles since, so I'd appreciate pushing this\n> forward.\n\nroger that -- will push ahead when I get into the office...\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Sep 2013 22:14:10 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." }, { "msg_contents": "On Thu, Sep 26, 2013 at 10:14 PM, Merlin Moncure <[email protected]> wrote:\n> On Thu, Sep 26, 2013 at 6:08 PM, Andres Freund <[email protected]> wrote:\n>> On 2013-08-27 12:17:55 -0500, Merlin Moncure wrote:\n>>> On Tue, Aug 27, 2013 at 10:55 AM, Andres Freund <[email protected]> wrote:\n>>> > On 2013-08-27 09:57:38 -0500, Merlin Moncure wrote:\n>>> >> + bool\n>>> >> + RecoveryMightBeInProgress(void)\n>>> >> + {\n>>> >> + /*\n>>> >> + * We check shared state each time only until we leave recovery mode. We\n>>> >> + * can't re-enter recovery, so there's no need to keep checking after the\n>>> >> + * shared variable has once been seen false.\n>>> >> + */\n>>> >> + if (!LocalRecoveryInProgress)\n>>> >> + return false;\n>>> >> + else\n>>> >> + {\n>>> >> + /* use volatile pointer to prevent code rearrangement */\n>>> >> + volatile XLogCtlData *xlogctl = XLogCtl;\n>>> >> +\n>>> >> + /* Intentionally query xlogctl without spinlocking! */\n>>> >> + LocalRecoveryInProgress = xlogctl->SharedRecoveryInProgress;\n>>> >> +\n>>> >> + return LocalRecoveryInProgress;\n>>> >> + }\n>>> >> + }\n>>> >\n>>> > I don't think it's acceptable to *set* LocalRecoveryInProgress\n>>> > here. That should only be done in the normal routine.\n>>>\n>>> quite right -- that was a major error -- you could bypass the\n>>> initialization call to the xlog with some bad luck.\n>>\n>> I've seen this in profiles since, so I'd appreciate pushing this\n>> forward.\n>\n> roger that -- will push ahead when I get into the office...\n\nattached is new version fixing some comment typos.\n\nmerlin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 27 Sep 2013 07:56:45 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cpu usage 100% on slave. s_lock problem." } ]
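The index-rotation workaround in the thread above is only spelled out in prose, so here is a minimal SQL sketch of one rotation. The table and index names (orders, idx_orders_status) are hypothetical, since the real definitions were not posted; DROP INDEX CONCURRENTLY requires PostgreSQL 9.2, which the reporter was running.

-- Build a fresh copy of the bloated index without blocking writers
-- (hypothetical names; substitute the real index definition).
CREATE INDEX CONCURRENTLY idx_orders_status_new ON orders (status);
-- Drop the bloated original, again without taking a long blocking lock.
DROP INDEX CONCURRENTLY idx_orders_status;
-- Restore the old name so application code and monitoring are unaffected.
ALTER INDEX idx_orders_status_new RENAME TO idx_orders_status;

The cost of this rotation is roughly double the index's disk space while both copies exist, and it cannot be used as-is for indexes that back primary key or unique constraints.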
[ { "msg_contents": "Can anyone offer suggestions on how I can optimize a query that contains the LIMIT OFFSET clause?The explain plan of the query is included in the notepad attachment.thanks\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 28 Aug 2013 13:39:46 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Poor OFFSET performance in PostgreSQL 9.1.6" }, { "msg_contents": "Two solutions come to mind. First possibility is table partitioning on the\ncolumn you're sorting. Second, depending on your application, is to use a\ncursor. Cursor won't help with web applications however a stateful\napplication could benefit.\n\nHTH\n-Greg\n\n\nOn Wed, Aug 28, 2013 at 2:39 PM, <[email protected]> wrote:\n\n> Can anyone offer suggestions on how I can optimize a query that contains\n> the LIMIT OFFSET clause?\n>\n> The explain plan of the query is included in the notepad attachment.\n>\n> thanks\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\nTwo solutions come to mind.  First possibility is table partitioning on the column you're sorting.  Second, depending on your application, is to use a cursor.  Cursor won't help with web applications however a stateful application could benefit.\nHTH-GregOn Wed, Aug 28, 2013 at 2:39 PM, <[email protected]> wrote:\nCan anyone offer suggestions on how I can optimize a query that contains the LIMIT OFFSET clause?\nThe explain plan of the query is included in the notepad attachment.thanks\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 28 Aug 2013 15:26:07 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor OFFSET performance in PostgreSQL 9.1.6" }, { "msg_contents": "On Wed, Aug 28, 2013 at 3:39 PM, <[email protected]> wrote:\n> Can anyone offer suggestions on how I can optimize a query that contains the\n> LIMIT OFFSET clause?\n>\n> The explain plan of the query is included in the notepad attachment.\n\nOFFSET is working as designed (that is, slowly). Managing pagination\nwith OFFSET is essentially a hack and will not scale to even medium\nsized tables. You have some SQL alternatives. One is cursors as\ngreg mentioned. 
Another is client side pagination:\n\nSelect * from labor_task_report this_\ninner join labor_tasks labor1_ on this_.labor_UID=20178\norder by\n labor1_START_TIME asc,\n this_.work_DATE_TIME asc,\n this_.work_UID asc,\n this_.task_REPORT_UID\nlimit 10000 offset 940000;\n\ncould become\n\nSelect * from labor_task_report this_\ninner join labor_tasks labor1_ on this_.labor_UID=20178\nwhere (\n labor1_START_TIME,\n this_.work_DATE_TIME asc,\n this_.work_UID asc,\n this_.task_REPORT_UID) >\n ($1, $2, $3, $4)\norder by\n labor1_START_TIME asc,\n this_.work_DATE_TIME asc,\n this_.work_UID asc,\n this_.task_REPORT_UID\nlimit 10000;\n\n\nwhere $1-$4 are the corresponding fields of the last row you read from\nthe last fetch.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Aug 2013 18:10:37 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor OFFSET performance in PostgreSQL 9.1.6" }, { "msg_contents": "On Thu, Aug 29, 2013 at 8:39 AM, <[email protected]> wrote:\n\n> Can anyone offer suggestions on how I can optimize a query that contains\n> the LIMIT OFFSET clause?\n>\n> The explain plan of the query is included in the notepad attachment.\n>\n> thanks\n>\n>\nBefore I write anything, I should warn that it has been a while since I had\nto read an explain analyze without also having a schema design to back it\nup, so I may have this wrong.\n\nSince the sort must be applied before the limit and the join must take\nplace before the sort, it's naturally pretty slow given the number of rows\nbeing processed.\n\nIt's hard to tell how the tables are constrained by looking at the explain\nanalyze. It seems that labor_uid is the primary key for the labor_tasks\ntable... (going by the 1 record that is returned. If so, then it's a bit\nconfusing why you'd need to sort on labor1.start_time at all, since only 1\nrow exists anyway...\n\nI got that from here:\n\nIndex Scan using corporate_labor_pkey on labor_tasks labor1_\n(cost=0.00..4.27 rows=1 width=954) (actual time=0.017..0.020 rows=1\nloops=1)\n Index Cond: (labor_uid = 20178)\n\n\nSo this is the case then you could do all of your sorting before the\njoin takes place by using a sub query. This would also mean you could\ndo your LIMIT and OFFSET in the same sub query. If the tables in the\nsub query were properly indexed then this could be a a pretty fast\noperation which would only hit less than 1 million rows. I'm not quite\nsure how your partitioning is setup though as the names seem to\nindicate both dates and labour UIDs... 
so I'll just ignore these for\nnow...\n\nThe query I came up with looked like this:\n\n\n SELECT *\n\n FROM labor_tasks labor_1\n\n INNER JOIN (SELECT *\n\n FROM labor_task_report this_\n\n WHERE labor_UID = 20178\n\n ORDER BY work_DATE_TIME asc, work_UID asc, task_REPORT_UID\n\n LIMIT 10000 OFFSET 940000\n\n) this_ ON labor_1.labor_uid = this_.labor_uid\n\nORDER BY this_.work_DATE_TIME asc, this.work_UID asc, this.task_REPORT_UID\n\nIf you had an index on labor_UID, work_date_time, work_UID,\ntask_Report_uid then inner would have a better chance at running\nquickly.\n\n\nThough I may have mis-read the explain analyze results and have it\ncompletely wrong.\n\nPostgresql is a pretty amazing feat of engineering, but it does lack\nsometimes when it comes to properly optimising queries to the most\noptimal way by detecting functional dependencies in tables to speed up\njoins and grouping.\n\nRegards\n\nDavid\n\n\n\n\n\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\nOn Thu, Aug 29, 2013 at 8:39 AM, <[email protected]> wrote:\n\nCan anyone offer suggestions on how I can optimize a query that contains the LIMIT OFFSET clause?The explain plan of the query is included in the notepad attachment.thanks\nBefore I write anything, I should warn that it has been a while since I had to read an explain analyze without also having a schema design to back it up, so I may have this wrong.\nSince the sort must be applied before the limit and the join must take place before the sort, it's naturally pretty slow given the number of rows being processed.It's hard to tell how the tables are constrained by looking at the explain analyze. It seems that labor_uid is the primary key for the labor_tasks table... (going by the 1 record that is returned. If so, then it's a bit confusing why you'd need to sort on labor1.start_time at all, since only 1 row exists anyway...\nI got that from here:Index Scan using corporate_labor_pkey on labor_tasks labor1_ (cost=0.00..4.27 rows=1 width=954) (actual time=0.017..0.020 rows=1 loops=1) \n Index Cond: (labor_uid = 20178)So this is the case then you could do all of your sorting before the join takes place by using a sub query. This would also mean you could do your LIMIT and OFFSET in the same sub query. If the tables in the sub query were properly indexed then this could be a a pretty fast operation which would only hit less than 1 million rows. I'm not quite sure how your partitioning is setup though as the names seem to indicate both dates and labour UIDs... 
so I'll just ignore these for now...\nThe query I came up with looked like this:\n SELECT * \n FROM labor_tasks labor_1 INNER JOIN (SELECT *\n             FROM labor_task_report this_             WHERE labor_UID = 20178\n             ORDER BY work_DATE_TIME asc, work_UID asc, task_REPORT_UID              LIMIT 10000 OFFSET 940000\n) this_ ON labor_1.labor_uid = this_.labor_uidORDER BY this_.work_DATE_TIME asc, this.work_UID asc, this.task_REPORT_UID \nIf you had an index on labor_UID, work_date_time, work_UID, task_Report_uid then inner would have a better chance at running quickly.\nThough I may have mis-read the explain analyze results and have it completely wrong.\nPostgresql is a pretty amazing feat of engineering, but it does lack sometimes when it comes to properly optimising queries to the most optimal way by detecting functional dependencies in tables to speed up joins and grouping.\nRegardsDavid\n \n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 29 Aug 2013 16:43:50 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor OFFSET performance in PostgreSQL 9.1.6" }, { "msg_contents": "On Wed, Aug 28, 2013 at 01:39:46PM -0700, [email protected] wrote:\n> Can anyone offer suggestions on how I can optimize a query that contains the LIMIT OFFSET clause?\n> The explain plan of the query is included in the notepad attachment.\n> thanks\n\nlarge offsets are slow, and there is no real escape from it.\n\nYou can change your paging solution, though, to something that will be\nfaster.\n\nPossible solutions/optimizations:\n\nhttp://www.depesz.com/2007/08/29/better-results-paging-in-postgresql-82/\n\nor\n\nhttp://www.depesz.com/2011/05/20/pagination-with-fixed-order/\n\nBest regards,\n\ndepesz\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 13:00:22 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor OFFSET performance in PostgreSQL 9.1.6" } ]
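The replies above recommend replacing OFFSET with keyset ("remember the last row seen") pagination. A self-contained sketch of that pattern follows; the table, columns, and literal values are hypothetical and only serve to make the pattern concrete.

-- Hypothetical table used to illustrate keyset pagination.
CREATE TABLE event_log (
    event_id  bigserial PRIMARY KEY,
    logged_at timestamptz NOT NULL,
    payload   text
);

-- Composite index matching the paging ORDER BY.
CREATE INDEX event_log_paging_idx ON event_log (logged_at, event_id);

-- First page.
SELECT event_id, logged_at, payload
FROM event_log
ORDER BY logged_at, event_id
LIMIT 100;

-- Next page: restart after the last row already shown instead of using OFFSET.
-- The client passes back the logged_at and event_id of that row.
SELECT event_id, logged_at, payload
FROM event_log
WHERE (logged_at, event_id) > ('2013-08-28 12:00:00+00'::timestamptz, 12345)
ORDER BY logged_at, event_id
LIMIT 100;

Because each page is an index range scan that starts at the previous page's last row, page 1,000 costs about the same as page 1, whereas OFFSET must walk and discard every earlier row.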
[ { "msg_contents": "Hi Greg,\n\nThe labor_task_report table is already partitioned by this_.work_date_time and this table contains approx. 15 billion rows. The other table labor_tasks is not partitioned. I'm thinking that the size of the external sort is part of the problem. If I remove the labor_tasks table from the SQL, the query returns in 10 sec. Could there be a postgresql.conf parameter that I could tweak to provide additional sorting resources to improve the overall query?\n\nUnfortunately this query is being generated by Hibernate 4.1.6, so the cursor solution won't help, I don't think.\n\nthanks\n\n\n-------- Original Message --------\nSubject: Re: [PERFORM] Poor OFFSET performance in PostgreSQL 9.1.6\nFrom: Greg Spiegelberg <[email protected]>\nDate: Wed, August 28, 2013 2:26 pm\nTo: [email protected]\nCc: pgsql-performance <[email protected]>\n\nTwo solutions come to mind. First possibility is table partitioning on the column you're sorting. Second, depending on your application, is to use a cursor. Cursor won't help with web applications however a stateful application could benefit.\nHTH\n-Greg\n\nOn Wed, Aug 28, 2013 at 2:39 PM, <[email protected]> wrote:\nCan anyone offer suggestions on how I can optimize a query that contains the LIMIT OFFSET clause?\nThe explain plan of the query is included in the notepad attachment.\nthanks\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 28 Aug 2013 15:08:16 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor OFFSET performance in PostgreSQL 9.1.6" } ]
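On the sorting-resources question above: the parameter that controls how much memory a single sort or hash may use before spilling to disk is work_mem, and it can be raised for one transaction or one role instead of server-wide. A minimal sketch with illustrative values only; whether it helps depends on whether EXPLAIN (ANALYZE, BUFFERS) actually reports an external (on-disk) sort, and a safe value depends on available RAM and on how many sorts can run concurrently.

-- Per transaction (when the statement can be wrapped explicitly):
BEGIN;
SET LOCAL work_mem = '256MB';  -- illustrative value
-- ... run the heavy report query here ...
COMMIT;

-- Per role (when the SQL is generated by Hibernate and cannot easily be
-- wrapped; the role name is hypothetical):
ALTER ROLE report_user SET work_mem = '256MB';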
[ { "msg_contents": "\nI am *expecting 1000+ hits to my PostgreSQL DB* and I doubt my standalone DB\nwill be able to handle it.\n\nSo I want to *scale out by adding more servers to share the load*. For this,\nI want to do clustering.\n\nI am *curious to know how clustering works in PostgreSQL.* (I don't want to\nknow how to setup cluster - as of now. Just want to know how clustering\nworks).\n\nWhen I look at some of the content available while googling, I am getting\nmore and more confused, as I find that in most of the sites, clustering is\nused interchangeably with replication.\n\n*My purpose is scale out to handle more load, not high availability.*\n\nCan any one please help me with the details or guide me to use urls. \n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-clustering-for-scale-out-works-in-PostgreSQL-tp5768917.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 05:14:03 -0700 (PDT)", "msg_from": "bsreejithin <[email protected]>", "msg_from_op": true, "msg_subject": "How clustering for scale out works in PostgreSQL" }, { "msg_contents": "On 29/08/13 13:14, bsreejithin wrote:\n>\n> I am *expecting 1000+ hits to my PostgreSQL DB* and I doubt my standalone DB\n> will be able to handle it.\n\nOMG! 1000 hits every year! And \"hits\" too - not just any type of \nquery!!!! :-)\n\nSeriously, if you try describing your setup, what queries make up your \n\"hits\" and what you mean by 1000 then there are people on this list who \ncan tell you what sort of setup you'll need.\n\nWhile you're away googling though, \"replication\" is indeed the term you \nwant. In particular \"hot standby\" which lets you run read-only queries \non the replicas.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 15:59:05 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "\nOn 08/29/2013 07:59 AM, Richard Huxton wrote:\n>\n> On 29/08/13 13:14, bsreejithin wrote:\n>>\n>> I am *expecting 1000+ hits to my PostgreSQL DB* and I doubt my\n>> standalone DB\n>> will be able to handle it.\n\nWe are going to need a little more detail here. In a normal environment \n1000+ \"hits\" isn't that much, even if the hit is generating a dozen \nqueries per page.\n\nA more appropriate action would be to consider the amount of transaction \nper second and the type of queries the machine will be doing. You will \nwant to look into replication, hot standby as well as read only scaling \nwith pgpool-II.\n\n\n>\n> OMG! 1000 hits every year! And \"hits\" too - not just any type of\n> query!!!! :-)\n>\n> Seriously, if you try describing your setup, what queries make up your\n> \"hits\" and what you mean by 1000 then there are people on this list who\n> can tell you what sort of setup you'll need.\n>\n> While you're away googling though, \"replication\" is indeed the term you\n> want. In particular \"hot standby\" which lets you run read-only queries\n> on the replicas.\n\nSarcasm with new recruits to the community is not the way to go.\n\n\nJD\n\n-- \nCommand Prompt, Inc. 
- http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nFor my dreams of your image that blossoms\n a rose in the deeps of my heart. - W.B. Yeats\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 08:08:13 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "Thanks a lot Joshua and others who have responded..\n\nI am sorry about not putting in more details in my initial post.\n\nWhat I posted is about a new setup that's going to come up..Discussions are\non whether to setup DB cluster to handle 1000 concurrent users.\n\nDB cluster was thought of because of the following reasons : \nA performance testing of the product was done and it was found that the DB\nside utilizations were on the higher side with 125 concurrent\nusers.Application server was 4 Core 12GB RAM , DB server was 4 Core 12GB\nRAM.\n\nPostgreSQL version was* 8.2*. Also, I noted that most of the column data\ntypes were declared as bigint, [quite unnecessarily]. ---I am trying to\nput-in all details here.\n\nThe product team is working on to migrate to version 9.2 and to look at\npossible areas where bigint data type can be changed to smaller data types.\n\nThe product owner wants to scale up to 1000 concurrent users. So one of the\ndiscussions was to setup up Application level clustering and probably DB\nlevel clustering to share the load (We don't need fail-over or HA here).\n\nFor application clustering what we thought is : to have EIGHT 4 Core 12GB\nmachines.\nCurrently for DB, the new server we have is : 8 Core 32 GB RAM.\n\nMany here are having doubts whether the DB server will be able to handle\n1000 concurrent user connections coming-in from the Application cluster\nnodes.\n\nI know that, this is still a generalised questions - but this is the\nscenario we have here..\nI am putting the question in this forum so that I could get as much opinions\nas possible before we decide on what to do next.\n\nTo those replying - thanks alot - any help will be very useful..\n\nAnd the sarcasm is ok, as long as I get to learn :)\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-clustering-for-scale-out-works-in-PostgreSQL-tp5768917p5768951.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 09:13:24 -0700 (PDT)", "msg_from": "bsreejithin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "On Thu, Aug 29, 2013 at 11:13 AM, bsreejithin <[email protected]> wrote:\n\n> Thanks a lot Joshua and others who have responded..\n>\n> I am sorry about not putting in more details in my initial post.\n>\n> What I posted is about a new setup that's going to come up..Discussions are\n> on whether to setup DB cluster to handle 1000 concurrent users.\n>\n\nOk. That's a start. Can you tell us more about what these users are\ndoing? What kind of queries are being issued to the database? 
How often\n(per user or total per time)?\n\n__________________________________________________________________________________\n*Mike Blackwell | Technical Analyst, Distribution Services/Rollout\nManagement | RR Donnelley*\n1750 Wallace Ave | St Charles, IL 60174-3401\nOffice: 630.313.7818\[email protected]\nhttp://www.rrdonnelley.com\n\n\n<http://www.rrdonnelley.com/>\n* <[email protected]>*\n\nOn Thu, Aug 29, 2013 at 11:13 AM, bsreejithin <[email protected]> wrote:\nThanks a lot Joshua and others who have responded..\n\nI am sorry about not putting in more details in my initial post.\n\nWhat I posted is about a new setup that's going to come up..Discussions are\non whether to setup DB cluster to handle 1000 concurrent users.Ok.  That's a start.  Can you tell us more about what these users are doing?  What kind of queries are being issued to the database?  How often (per user or total per time)?   \n__________________________________________________________________________________Mike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 Office: 630.313.7818 [email protected]://www.rrdonnelley.com", "msg_date": "Thu, 29 Aug 2013 11:21:28 -0500", "msg_from": "Mike Blackwell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "The performance test that was conducted was for 1 Hour. \n\nThere are 6 transactions. 2 DB inserts and 4 SELECTs.\nEvery 2 minutes there will be 4 SELECTs. And every 3 minutes there will be 2\nDB inserts.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-clustering-for-scale-out-works-in-PostgreSQL-tp5768917p5768957.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 09:42:15 -0700 (PDT)", "msg_from": "bsreejithin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "\nOn 08/29/2013 09:42 AM, bsreejithin wrote:\n>\n> The performance test that was conducted was for 1 Hour.\n>\n> There are 6 transactions. 2 DB inserts and 4 SELECTs.\n> Every 2 minutes there will be 4 SELECTs. And every 3 minutes there will be 2\n> DB inserts.\n\nThis shouldn't be a problem with proper hardware and a connection \npooler. The concern isn't the 1000 sessions, it is the creating and \ndestroying in rapid succession of 1000 connections. A connection pooler \nwill resolve that issue.\n\nSincerely,\n\nJD\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nFor my dreams of your image that blossoms\n a rose in the deeps of my heart. - W.B. Yeats\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 09:46:06 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of bsreejithin\n> Sent: Thursday, August 29, 2013 12:42 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] How clustering for scale out works in PostgreSQL\n> \n> The performance test that was conducted was for 1 Hour.\n> \n> There are 6 transactions. 2 DB inserts and 4 SELECTs.\n> Every 2 minutes there will be 4 SELECTs. And every 3 minutes there will be 2\n> DB inserts.\n> \n> \n> \n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/How-clustering-for-scale-out-\n> works-in-PostgreSQL-tp5768917p5768957.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n> \n> \n\nWith that kind of activity, you don't need clustering for your 1000 users.\nWhat you need is PgBouncer, it should solv your problem. Please read some docs on PgBouncer, it's \"light-weight\" and very easy to setup.\n\nRegards,\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 16:49:56 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "Ok Igor..Will check out PgBouncer..Thanks a lot.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-clustering-for-scale-out-works-in-PostgreSQL-tp5768917p5768960.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 09:52:44 -0700 (PDT)", "msg_from": "bsreejithin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "Thanks Joshua..Will look to use connection pooler which Igor mentioned..\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-clustering-for-scale-out-works-in-PostgreSQL-tp5768917p5768961.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 09:54:36 -0700 (PDT)", "msg_from": "bsreejithin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "bsreejithin wrote on 29.08.2013 18:13:\n> PostgreSQL version was* 8.2*.\n\n8.2 has long been deprecated.\n\nFor a new system you should use 9.2 (or at least 9.1)\n\nThomas\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Aug 2013 19:45:22 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "Ya..sure...Migration to 9.2 is one of the activities planned and in fact\nit's already on track.Thanks Thomas\n\n\nOn Thu, Aug 29, 2013 at 11:16 PM, Thomas 
Kellerer [via PostgreSQL] <\[email protected]> wrote:\n\n> bsreejithin wrote on 29.08.2013 18:13:\n> > PostgreSQL version was* 8.2*.\n>\n> 8.2 has long been deprecated.\n>\n> For a new system you should use 9.2 (or at least 9.1)\n>\n> Thomas\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([hidden email]<http://user/SendEmail.jtp?type=node&node=5768973&i=0>)\n>\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> ------------------------------\n> If you reply to this email, your message will be added to the discussion\n> below:\n>\n> http://postgresql.1045698.n5.nabble.com/How-clustering-for-scale-out-works-in-PostgreSQL-tp5768917p5768973.html\n> To unsubscribe from How clustering for scale out works in PostgreSQL, click\n> here<http://postgresql.1045698.n5.nabble.com/template/NamlServlet.jtp?macro=unsubscribe_by_code&node=5768917&code=YnNyZWVqaXRoaW5AZ21haWwuY29tfDU3Njg5MTd8MTYxODQyODgxOA==>\n> .\n> NAML<http://postgresql.1045698.n5.nabble.com/template/NamlServlet.jtp?macro=macro_viewer&id=instant_html%21nabble%3Aemail.naml&base=nabble.naml.namespaces.BasicNamespace-nabble.view.web.template.NabbleNamespace-nabble.view.web.template.NodeNamespace&breadcrumbs=notify_subscribers%21nabble%3Aemail.naml-instant_emails%21nabble%3Aemail.naml-send_instant_email%21nabble%3Aemail.naml>\n>\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-clustering-for-scale-out-works-in-PostgreSQL-tp5768917p5768974.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nYa..sure...Migration to 9.2 is one of the activities planned and in fact it's already on track.Thanks ThomasOn Thu, Aug 29, 2013 at 11:16 PM, Thomas Kellerer [via PostgreSQL] <[hidden email]> wrote:\n\n\n\tbsreejithin wrote on 29.08.2013 18:13:\n> PostgreSQL version was* 8.2*.\n8.2 has long been deprecated.\nFor a new system you should use 9.2 (or at least 9.1)\nThomas\n-- \nSent via pgsql-performance mailing list ([hidden email])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\nIf you reply to this email, your message will be added to the discussion below:\nhttp://postgresql.1045698.n5.nabble.com/How-clustering-for-scale-out-works-in-PostgreSQL-tp5768917p5768973.html\n\n\n\t\t\n\t\tTo unsubscribe from How clustering for scale out works in PostgreSQL, click here.\nNAML\n\n\nView this message in context: Re: How clustering for scale out works in PostgreSQL\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.", "msg_date": "Thu, 29 Aug 2013 10:48:16 -0700 (PDT)", "msg_from": "bsreejithin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "bsreejithin <[email protected]> wrote:\n\n> What I posted is about a new setup that's going to come\n> up..Discussions are on whether to setup DB cluster to handle 1000\n> concurrent users.\n\nI previously worked for Wisconsin Courts, where we had a single\nserver which handled about 3000 web users collectively generating\nhundreds of web hits per second generating thousands of queries per\nsecond, while at the same time functioning as a replication target\nfrom 80 sources sending about 20 transactions per second which\nmodified data (many having a large number of DML statements per\ntransaction) against a 3 TB database.  
The same machine also hosted\na transaction repository for all modifications to the database,\nindexed for audit reports and ad hoc queries; that was another 3\nTB.  Each of these was running on a 40-drive RAID.\n\nShortly before I left we upgraded from a machine with 16 cores and\n256 GB RAM to one with 32 cores and 512 GB RAM, because there is\nconstant growth in both database size and load.  Performance was\nstill good on the smaller machine, but monitoring showed we were\napproaching saturation.  We had started to see some performance\ndegradation on the old machine, but were able to buy time by\nreducing the size of the web connection pool (in the Java\napplication code) from 65 to 35.  Testing different connection pool\nsizes showed that pool size to be optimal for our workload on that\nmachine; your ideal pool size can only be determined through\ntesting.\n\nYou can poke around in this application here, if you like:\nhttp://wcca.wicourts.gov/\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 31 Aug 2013 07:44:01 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "On Thu, Aug 29, 2013 at 6:14 AM, bsreejithin <[email protected]> wrote:\n>\n> I am *expecting 1000+ hits to my PostgreSQL DB* and I doubt my standalone DB\n> will be able to handle it.\n>\n> So I want to *scale out by adding more servers to share the load*. For this,\n> I want to do clustering.\n>\n> I am *curious to know how clustering works in PostgreSQL.* (I don't want to\n> know how to setup cluster - as of now. Just want to know how clustering\n> works).\n>\n> When I look at some of the content available while googling, I am getting\n> more and more confused, as I find that in most of the sites, clustering is\n> used interchangeably with replication.\n>\n> *My purpose is scale out to handle more load, not high availability.*\n>\n> Can any one please help me with the details or guide me to use urls.\n\nWhat you are doing is called capacity planning, and it's a vital step\nbefore deploying an app and the servers to support it.\n\nLook at several things:\nHow many WRITEs do you need to make a second.\nHow many READs do you need to make a second.\nHow big will your data set be.\nHow many clients you'll have concurrently.\n\nYour first post pretty much just has how many concurrent users. Later\nposts had read and writes but didn't specify if that's per user or in\ntotal. I'm guessing per user. Either way the total load you listed was\npretty small. So yeah, the pgbouncer pooling solution looks optimal.\nBut you might want to look at how big your data set is, how fast it\nwill grow, and what kind of indexes it'll need for good performance as\nwell. 
If your data set is likely to get REALLY big then that's another\nissue to tackle as well.\n\nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Sep 2013 19:38:21 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "On 08/30/2013 01:48 AM, bsreejithin wrote:\n> Ya..sure...Migration to 9.2 is one of the activities planned and in fact\n> it's already on track.Thanks Thomas\n\nYou'll want to re-do your performance testing; a huge amount has changed\nsince 8.2.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 02 Sep 2013 15:08:57 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "bsreejithin wrote:\n> \n> I am *expecting 1000+ hits to my PostgreSQL DB* and I doubt my standalone DB\n> will be able to handle it.\n> \n> So I want to *scale out by adding more servers to share the load*. For this,\n> I want to do clustering.\n>\n> DB server was 4 Core 12GB RAM.\n\nYou're jumping way ahead here. You have a medium sized server that\nshould effortlessly handle most loads if its I/O subsystem is up to it.\n\nIt's a good idea to plan for what you'll do as load grows, but it's not\nnecessary to jump straight to engineering some \"web scale\" monstrosity\nif you don't have to.\n\n> The performance test that was conducted was for 1 Hour. \n\n> There are 6 transactions. 2 DB inserts and 4 SELECTs.\n> Every 2 minutes there will be 4 SELECTs. And every 3 minutes there will be 2\n> DB inserts.\n\nIt's not possible to give useful specific advice without knowing what\nthe \"selects\" and \"updates\" you're dealing with are. After all, a\nsingle-tuple update of a non-indexed field with no trigger side-effects\nwill be way sub-millisecond. On the other hand, a big update over a\nmulti-table join that causes updates on several multi-column indexes /\nGIN indexes / etc, a cascade update, etc, might take hours.\n\nYou need to work out what the actual load is. Determine whether you're\nbottlenecked on disk reads, disk writes, disk flushes (fsync), CPU, etc.\n\nAsk some basic tuning questions. Does your DB fit in RAM? Do at least\nthe major indexes and \"hot\" smaller tables fit in RAM? Is\neffective_cache_size set to tell the query planner that.\n\nLook at the query plans. Is there anything grossly unreasonable? Do you\nneed to adjust any tuning params (random_page_cost, etc)? Is\neffective_cache_size set appropriately for the server? Figure out\nwhether there are any indexes that're worth creating that won't make the\nwrite load too much worse.\n\nFind the point where throughput stops scaling up with load on the\nserver. Put a connection pooler in place and limit concurrent working\nconnections to PostgreSQL to about that level; overall performance will\nbe greatly improved by not trying to do too much all at once.\n\n> I am *curious to know how clustering works in PostgreSQL.* (I don't want to\n> know how to setup cluster - as of now. 
Just want to know how clustering\n> works).\n\nThe \"clustering\" available in PostgreSQL is a variety of forms of\nreplication.\n\nIt is important to understand that attempting to scale out to\nmulti-server setups requires significant changes to many applications.\nThere is no transparent multi-master clustering for PostgreSQL.\n\nIf you're on a single server, you can rely on the strict rules\nPostgreSQL follows for traffic isolation. It will ensure that two\nupdates can't conflict with row-level locking. In SERIALIZABLE isolation\nit'll protect against a variety of concurrency problems.\n\nMost of that goes away when you go multi-server. If you're using a\nsingle master and multiple read-replicas you have to deal with lags,\nwhere the replicas haven't yet seen / replayed transactions performed on\nthe master. So you might UPDATE a row in one transaction, only to find\nthat when you SELECT it the update isn't there ... then it suddenly\nappears when you SELECT again. Additionally, long-running queries on the\nread-only replicas can be aborted to allow the replica to continue\nreplaying changes from the master.\n\nYou can work around that one with synchronous replication, but then you\ncreate another set of performance challenges on the master.\n\nThere are also a variety of logical / row-level replication options.\nThey have their own trade-offs in terms of impact on master performance,\ntransaction consistency, etc.\n\nIt only gets more \"fun\" when you want multiple masters, where you can\nwrite to more than one server. Don't go there unless you have to.\n\n> When I look at some of the content available while googling, I am getting\n> more and more confused, as I find that in most of the sites, clustering is\n> used interchangeably with replication.\n\nWell, a cluster of replicas is still a cluster.\n\nIf you mean \"transparent multi-master clustering\", well that's another\nthing entirely.\n\nI strongly recommend you go back to basics. Evaluate the capacity of the\nserver you've got, update PostgreSQL, characterize the load, do some\nbasic tuning, benchmark based on a simulation of your load, get a\nconnection pooler in place, do some basic query pattern and plan\nanalysis, etc.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 02 Sep 2013 15:46:06 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "On 8/31/13 9:44 AM, Kevin Grittner wrote:\n> bsreejithin <[email protected]> wrote:\n>\n>> What I posted is about a new setup that's going to come\n>> up..Discussions are on whether to setup DB cluster to handle 1000\n>> concurrent users.\n>\n> I previously worked for Wisconsin Courts, where we had a single\n> server which handled about 3000 web users collectively generating\n> hundreds of web hits per second generating thousands of queries per\n> second, while at the same time functioning as a replication target\n> from 80 sources sending about 20 transactions per second which\n> modified data (many having a large number of DML statements per\n> transaction) against a 3 TB database. 
The same machine also hosted\n> a transaction repository for all modifications to the database,\n> indexed for audit reports and ad hoc queries; that was another 3\n> TB. Each of these was running on a 40-drive RAID.\n>\n> Shortly before I left we upgraded from a machine with 16 cores and\n> 256 GB RAM to one with 32 cores and 512 GB RAM, because there is\n> constant growth in both database size and load. Performance was\n> still good on the smaller machine, but monitoring showed we were\n> approaching saturation. We had started to see some performance\n> degradation on the old machine, but were able to buy time by\n> reducing the size of the web connection pool (in the Java\n> application code) from 65 to 35. Testing different connection pool\n> sizes showed that pool size to be optimal for our workload on that\n> machine; your ideal pool size can only be determined through\n> testing.\n>\n> You can poke around in this application here, if you like:\n> http://wcca.wicourts.gov/\n\nJust to add another data point...\n\nWe run multiple ~2TB databases that see an average workload of ~700 transactions per second with peaks well above 4000 TPS. This is on servers with 512G of memory and varying numbers of cores.\n\nWe probably wouldn't need such beefy hardware for this, except our IO performance (seen by the server) is pretty pathetic, there's some flaws in the data model (that I inherited), and Rails likes to do some things that are patently stupid. Were it not for those issues we could probably get by with 256G or even less.\n\nGranted, the servers we're running on cost around $30k a pop and there's a SAN behind them. But by the time you get to that kind of volume you should be able to afford good hardware... if not you should be rethinking your business model! ;)\n\nIf you setup some form of replication it's very easy to move to larger servers as you grow. I'm sure that when Kevin moved their database it was a complete non-event.\n-- \nJim C. Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Sep 2013 10:18:58 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" }, { "msg_contents": "Jim Nasby <[email protected]> wrote:\n\n> If you setup some form of replication it's very easy to move to\n> larger servers as you grow. I'm sure that when Kevin moved their\n> database it was a complete non-event.\n\nYeah, replication was turned on for the new server in addition to\nthe old one.  When everything was ready the web application\nconfiguration was updates so that it started using the new server\nfor new requests and disconnected from the old server as requests\ncompleted.  Zero down time.  
No user-visible impact, other than\nthings ran a little faster because of the better hardware.\n\nOne generation of old hardware is kept in replication for running\nad hoc queries and to provide availability in case the new one\ncrashes.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Sep 2013 11:46:20 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How clustering for scale out works in PostgreSQL" } ]
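A side note on the read-replica lag that Craig describes in this thread: it is easy to observe directly. The following is only a sketch (it assumes a 9.1-or-later streaming-replication hot standby), and the lag figure is approximate because it also grows while the master happens to be idle:

    -- Run on the standby; pg_is_in_recovery() returns false on the master.
    SELECT pg_is_in_recovery()                     AS is_standby,
           now() - pg_last_xact_replay_timestamp() AS approx_replay_lag;

Applications that scale out with read replicas usually either tolerate this lag or route lag-sensitive reads back to the master.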
[ { "msg_contents": "Hi all,\n\nI've started using views in an attempt to try and simplify some of my\nmore complex reporting and running into some snags and I'm trying to\nfigure out how Postgres decides to join data with my view.\n\nJust to present a simplified representation of what I am trying to\naccomplish the following would match pretty well with what I'm doing.\nAt the center I have a table called \"jobs\" that holds information about\njobs clients have requested.\nRelated to my jobs are a number of different tables that each hold\ndifferent types of costs.\nThere is a simple table called \"directcosts\" that simply holds single\nitems that represent a cost on that job\nThere is a more complex invoice/detail structure for invoices received\nby suppliers who did things in relation to the job\nAnd a few more structures like that.\n\nWhat I currently do when I need to report on the total costs per job,\nand which works very well, is the following:\n----\nselect jobid, jobdesc, sum(cost)\nfrom (\n select jobid, jobdesc, dcamount as cost\n from jobs\n join directcosts on jobid = dcjobid\n where <some filter for my jobs>\nunion all\n select jobid, jobdesc, detamount as cost\n from jobs\n join invoiceheader on jobid = invjobid\n join invoicedetail on detinvid = invid \n where <some filter for my jobs>\n) totalcosts\ngroup by jobid, jobdesc\n----\n\nWork well enough.. But as I'm using the same data in different reports\nand I though a view might be smart. So I created a view:\n----\ncreate view v_costs as\n select dcjobid as costjobid, sum(dcamount) as costamount\n from directcosts\n group by dcjobid\nunion all\n select invjobid as costjobid, sum(detamount) as costamount from\ninvoiceheader\n join finvoicedetail on detinvid = invid\n group by invjobid\n----\n\nAnd rewrote my report to:\n----\nselect jobid, jobdesc, sum(costamount)\nfrom jobs\njoin v_costs on costjobid = jobid\nwhere <some filter for my jobs>\ngroup by jobid, jobdesc\n----\n\nNow what I was hoping for was that postgres would start at my jobs\ntable, find the records I'm trying to report on and then index scan on\nthe related tables and start aggregating the amounts.\nWhat it seems to do is to first execute the view to get totals for all\nthe jobs in the database and join that result set with my 2 or 3 jobs\nthat match my filter.\n\nWhat is it about my view that prevents postgres to effectively use it?\nThe group bys? the union?\n\nCheers,\n\nBastiaan Olij\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Aug 2013 12:22:15 +1000", "msg_from": "Bastiaan Olij <[email protected]>", "msg_from_op": true, "msg_subject": "Optimising views" }, { "msg_contents": "On 8/29/13 9:22 PM, Bastiaan Olij wrote:\n> Work well enough.. But as I'm using the same data in different reports\n> and I though a view might be smart. 
So I created a view:\n> ----\n> create view v_costs as\n> select dcjobid as costjobid, sum(dcamount) as costamount\n> from directcosts\n> group by dcjobid\n> union all\n> select invjobid as costjobid, sum(detamount) as costamount from\n> invoiceheader\n> join finvoicedetail on detinvid = invid\n> group by invjobid\n> ----\n>\n> And rewrote my report to:\n> ----\n> select jobid, jobdesc, sum(costamount)\n> from jobs\n> join v_costs on costjobid = jobid\n> where <some filter for my jobs>\n> group by jobid, jobdesc\n> ----\n>\n> Now what I was hoping for was that postgres would start at my jobs\n> table, find the records I'm trying to report on and then index scan on\n> the related tables and start aggregating the amounts.\n> What it seems to do is to first execute the view to get totals for all\n> the jobs in the database and join that result set with my 2 or 3 jobs\n> that match my filter.\n>\n> What is it about my view that prevents postgres to effectively use it?\n> The group bys? the union?\n\nIt's probably either the GROUP BY or the UNION. Try stripping those out one at a time and see if it helps. If it doesn't, please post EXPLAIN ANALYZE (or at least EXPLAIN) output.\n-- \nJim C. Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Sep 2013 10:10:08 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising views" } ]
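One concrete variant of the advice above is to move the aggregation out of the view entirely, so that each UNION ALL branch is a plain scan and the outer report query does the SUM (exactly as the original hand-written query already did). This is only a sketch against the table and column names given in the thread, and it assumes the detail table is invoicedetail (the posted view says finvoicedetail, presumably a typo); whether the planner then picks the hoped-for index-driven plan still needs to be checked with EXPLAIN ANALYZE:

    -- Drop the old view first, then recreate it without GROUP BY:
    CREATE VIEW v_costs AS
        SELECT dcjobid  AS costjobid, dcamount  AS costamount
        FROM directcosts
      UNION ALL
        SELECT invjobid AS costjobid, detamount AS costamount
        FROM invoiceheader
        JOIN invoicedetail ON detinvid = invid;

    -- The report aggregates in the outer query; summing the individual cost rows
    -- gives the same totals as summing per-source subtotals:
    SELECT jobid, jobdesc, sum(costamount)
    FROM jobs
    JOIN v_costs ON costjobid = jobid
    WHERE <some filter for my jobs>
    GROUP BY jobid, jobdesc;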
[ { "msg_contents": "Hi all,\n\nI'm migrating a web application to an ORM framework (Doctrine) so I need a\nnew way to get statistics about entities into the application without\nimporting all data, only the results (e.g. load total number of children\ninstead of loading all children into the application and counting it\nafterwards). My current solution is to create a view for those statistics\nand map it to an (read-only) entity in my application. This view is joined\nwith the table containing the entities on which I need statistics.\n\nThe entity table is called 'work'. The view containing statistical\ninformation about works is called wps2.\n\nThe problem is that this join is performing very badly when more than one\nwork is involved. It chooses a plan that is orders of magnitude slower.\n\nI have attached\n- The (simplified) table definitions\n- The (simplified) view\n- Two queries with explain analyze plan: \"IN (1)\" => fast, \"IN (1,3)\" =>\nslow\n- postgresql.conf\n\nI do not understand why the planner does not consider the nested loop in\nthe second case, like it does in the first case.\n\nCan anyone help me?\n\nThanks.\n\nKind regards,\nMathieu\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 30 Aug 2013 11:05:46 +0200", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "Query plan change with multiple elements in IN clause" }, { "msg_contents": "Mathieu De Zutter <[email protected]> writes:\n> The problem is that this join is performing very badly when more than one\n> work is involved. It chooses a plan that is orders of magnitude slower.\n\n> I have attached\n> - The (simplified) table definitions\n> - The (simplified) view\n> - Two queries with explain analyze plan: \"IN (1)\" => fast, \"IN (1,3)\" =>\n> slow\n> - postgresql.conf\n\nThe reason you get a nice plan in the first case is that \"w.id in (1)\"\nis treated as \"w.id = 1\", and then there is logic that combines that with\n\"w.id = wps.id\" to conclude that we can synthesize a condition \"wps.id = 1\".\nNone of that happens when there's more than one IN item, because it's not\nan equality operator anymore.\n\nYou might be able to do something like\n JOIN (VALUES (1),(3)) foo(x) ON w.id = foo.x\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Aug 2013 10:00:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan change with multiple elements in IN clause" } ]
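Spelled out against the names used in this thread, Tom's VALUES-join suggestion looks roughly like the sketch below. The SELECT list and the exact join to the wps2 view are assumptions (the full query was only attached, not quoted), but the join condition w.id = wps.id is the one mentioned in the reply; the idea is that each VALUES row behaves like an equality on w.id, so the planner can drive a nested loop from the two rows instead of needing a single equality constant:

    SELECT w.*, wps.*
    FROM (VALUES (1), (3)) AS foo(x)
    JOIN work w   ON w.id = foo.x
    JOIN wps2 wps ON wps.id = w.id;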
[ { "msg_contents": "Hi,\nThis is my first post on this group so welcome everyone! Currently I'm working on optimizing a quite simple database used to store events from one website. Every event is a set of data describing user behaviour. The main table that stores all events is built using schema: \n\n Column | Type | Modifiers\n-----------------+-----------------------------+-----------\n id | bigint | not null\n browser | character varying(255) |\n created | timestamp without time zone |\n eventsource | character varying(255) |\n eventtype | character varying(255) |\n ipaddress | character varying(255) |\n objectid | bigint |\n sessionid | character varying(255) |\n shopids | integer[] |\n source | character varying(255) |\n sourceid | bigint |\n supplierid | bigint |\n cookieuuid | uuid |\n serializeddata | bytea |\n devicetype | character varying(255) |\n operatingsystem | character varying(255) |\n\n It was a quick project to play with EclipseLink, Hibernate and some Jersey Rest services, so isn't perfect. However the database became quite usefull and we decided to optimize this table as it grew quite large (128GB right now without indexes, about 630M records). There is only primary key index on this table. Here is the list of changes that I'd like to make to the table (some of them should be done from the scratch): \n\n1. Changing ipaddress from varchar to inet - this should save some space and lower the size of potential index. \n\n2. Changing id for some composite id with created contained in it.\n\n3. And this part is most interesting for me. Columns browser, eventsource, eventtype, devicetype, operatingsystem contain a small pool of strings - for example for devicetype this is set to Computer, Mobile, Tablet or Unknown. Browser is set to normalized browser name. In every case I can store those data using one of 3 different methods: \n\n- store as varchar as it is now - nice and easy, but index on those columns is quite big and I think storing many of similar strings is waste of space. \n\n- store only id's and join external tables as needed, for example for browsers I only need smallint key, as there is a limited number of browsers. The column browser becomes smallint and we have additional table with two columns (id, browser varchar). This should save some space on event table, but if I want name of the browser in some report I need to join tables. Second thing - on every insert there is constraint that is checked for this field and this can affect performance. I was thinking about the same strategy for the remaining fields - this would give me 5 additional tables and 5 additional constraints on event table. Browser table will have about ~100 records, eventtype and eventsource will have about 8-12 records each, devicetype - 4 records, operatingsystem - didn't really check this one, but I think something around 100 like browser. \n\n- introduce enumerator type for each of the column and store those values as enumerator. This one should be the most space efficient, but it will be problematic in case of changing column values like browser or operatingsystem as altering enumerator isn't that simple. \n\nFor browser average text length is 19 characters, for eventsource and eventtype eventsource average text lenght is 24 characters. Database encoding is set to UTF8. \n\nMy question is - what is estimated difference in table size between those 3 variants of storing columns? 
In theory third one should give me the smallest database and index size but is the most problematic from all of the above. \n\nLukasz Walkowski\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 31 Aug 2013 15:35:58 +0200", "msg_from": "=?utf-8?Q?=C5=81ukasz_Walkowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Varchar vs foreign key vs enumerator - table and index size" }, { "msg_contents": "=?utf-8?Q?=C5=81ukasz_Walkowski?= <[email protected]> writes:\n> 3. And this part is most interesting for me. Columns browser, eventsource, eventtype, devicetype, operatingsystem contain a small pool of strings - for example for devicetype this is set to Computer, Mobile, Tablet or Unknown. Browser is set to normalized browser name. In every case I can store those data using one of 3 different methods: \n\n> - store as varchar as it is now - nice and easy, but index on those columns is quite big and I think storing many of similar strings is waste of space. \n\nIf you're starting to be concerned about space, it's definitely time to\nget away from this choice. Depending on what locale you're using,\ncomparing varchar values can be quite an expensive operation, too.\n\n> - store only id's and join external tables as needed, for example for browsers I only need smallint key, as there is a limited number of browsers.\n\nI think the main \"pro\" of this approach is that it doesn't use any\nnonstandard SQL features, so you preserve your options to move to some\nother database in the future. The main \"con\" is that you'd be buying into\nfairly significant rewriting of your application code, since just about\nevery query involving these columns would have to become a join.\n\nFWIW, I'd be inclined to just use integer not smallint. The space savings\nfrom smallint is frequently illusory because of alignment considerations\n--- for instance, an index on a single smallint column will *not* be any\nsmaller than one on a single int column. And smallint has some minor\nusage annoyances because it's a second-class citizen in the type promotion\nhierarchy --- you may find yourself needing explicit casts to smallint\nhere and there.\n\n> - introduce enumerator type for each of the column and store those values as enumerator. This one should be the most space efficient, but it will be problematic in case of changing column values like browser or operatingsystem as altering enumerator isn't that simple. \n\nSpace-wise this is going to be equivalent to the integer-foreign-key\nsolution. It's much nicer from a notational standpoint, though, because\nyou don't need joins --- it's likely that you'd need few if any\napplication code changes to go this route. (But I'd advise doing some\ntesting to verify that before you take it as a given.)\n\nYou're right though that enums are not a good option if you expect\nfrequent changes in the pool of allowed values. I guess the question\nis how often does that happen, in your application? Adding a new value\nfrom time to time isn't much of a problem unless you want to get picky\nabout how it sorts relative to existing values. 
But you can't ever delete\nan individual enum value, and we don't support renaming them either.\n(Though if you're desperate, I believe a manual UPDATE on the pg_enum\ncatalog would work for that.)\n\nAnother thing to think about is whether you have auxiliary data about each\nvalue that might usefully be stored as additional columns in the small\ntables. The enum approach doesn't directly handle that, though I suppose\nyou could still create small separate tables that use an enum column as\nprimary key.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 31 Aug 2013 11:20:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar vs foreign key vs enumerator - table and index size" }, { "msg_contents": "Tom,\n\n> If you're starting to be concerned about space, it's definitely time to\n> get away from this choice. Depending on what locale you're using,\n> comparing varchar values can be quite an expensive operation, too.\n\nI don't like wasting space and processing power even if more work is required to achieve this. We use pl_PL.UTF-8 as our locale.\n\n> I think the main \"pro\" of this approach is that it doesn't use any\n> nonstandard SQL features, so you preserve your options to move to some\n> other database in the future. The main \"con\" is that you'd be buying into\n> fairly significant rewriting of your application code, since just about\n> every query involving these columns would have to become a join.\n\nWell, I don't really think I will move from Postgresql anytime soon. It's just the best database for me. Rewriting code is one of the things I'm doing right now but before I touch database, I want to be sure that the choices I made are good.\n\n> FWIW, I'd be inclined to just use integer not smallint. The space savings\n> from smallint is frequently illusory because of alignment considerations\n> --- for instance, an index on a single smallint column will *not* be any\n> smaller than one on a single int column. And smallint has some minor\n> usage annoyances because it's a second-class citizen in the type promotion\n> hierarchy --- you may find yourself needing explicit casts to smallint\n> here and there.\n\nOk, thats important information. Thank you.\n\n> \n> Space-wise this is going to be equivalent to the integer-foreign-key\n> solution. It's much nicer from a notational standpoint, though, because\n> you don't need joins --- it's likely that you'd need few if any\n> application code changes to go this route. (But I'd advise doing some\n> testing to verify that before you take it as a given.)\n> \n> You're right though that enums are not a good option if you expect\n> frequent changes in the pool of allowed values. I guess the question\n> is how often does that happen, in your application? Adding a new value\n> from time to time isn't much of a problem unless you want to get picky\n> about how it sorts relative to existing values. But you can't ever delete\n> an individual enum value, and we don't support renaming them either.\n> (Though if you're desperate, I believe a manual UPDATE on the pg_enum\n> catalog would work for that.)\n> \n> Another thing to think about is whether you have auxiliary data about each\n> value that might usefully be stored as additional columns in the small\n> tables. 
The enum approach doesn't directly handle that, though I suppose\n> you could still create small separate tables that use an enum column as\n> primary key.\n> \n> \t\t\tregards, tom lane\n\nSo, I'll go for enumerators for device type, eventtype and eventsource as those columns are quite stable. For browser and operating system I'll do external tables.\n\nThank you - any additional tips are welcome.\n\nReagards,\nLukasz Walkowski\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 31 Aug 2013 19:06:01 +0200", "msg_from": "=?utf-8?Q?=C5=81ukasz_Walkowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Varchar vs foreign key vs enumerator - table and index size" }, { "msg_contents": "On Sat, Aug 31, 2013 at 10:06 AM, Łukasz Walkowski <\[email protected]> wrote:\n\n> > I think the main \"pro\" of this approach is that it doesn't use any\n> > nonstandard SQL features, so you preserve your options to move to some\n> > other database in the future. The main \"con\" is that you'd be buying\n> into\n> > fairly significant rewriting of your application code, since just about\n> > every query involving these columns would have to become a join.\n>\n> Well, I don't really think I will move from Postgresql anytime soon. It's\n> just the best database for me. Rewriting code is one of the things I'm\n> doing right now but before I touch database, I want to be sure that the\n> choices I made are good.\n>\n\nIf your applications are read-heavy and only have a small-ish amount of\ncode that inserts/updates the table, it may not be that much of a rewrite.\nYou can create a integer/varchar table of key/values, use its key to\nreplace the current varchar column, rename the original table, and create a\nview with the original table's name. Code that only reads the data won't\nknow the difference. And it's a portable solution.\n\nI did this and it worked out well. If the key/value pairs table is\nrelatively small, the planner does an excellent job of generating efficient\nqueries against the big table.\n\nCraig", "msg_date": "Sat, 31 Aug 2013 18:31:06 -0700 (PDT)", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar vs foreign key vs enumerator - table and index size" }, { "msg_contents": "2013/8/31 Łukasz Walkowski <[email protected]>\n\n>\n> 3. And this part is most interesting for me. Columns browser, eventsource,\n> eventtype, devicetype, operatingsystem contain a small pool of strings -\n> for example for devicetype this is set to Computer, Mobile, Tablet or\n> Unknown. Browser is set to normalized browser name. In every case I can\n> store those data using one of 3 different methods:\n>\n>\nWell, there are some more options:\na) Store int keys and do mapping in the application (e.g. with java enums).\nThis can save you a join, that is especially useful if you are going to do\npaged output with limit/offset scenario. Optimizer sometimes produce\nsuboptimal plans for join in offset/limit queries.\nb) Store small varchar values as keys (up to \"char\" type if you really want\nto save space) and do user display mapping in application. It's different\nfrom (a) since it's harder to mess with the mapping and values are still\nmore or less readable with simple select. But it can be less efficient than\n(a).\nc) Do mixed approach with mapping table, loaded on start into application\nmemory. This would be an optimization in case you get into optimizer\ntroubles.\n\nBest regards, Vitalii Tymchyshyn", "msg_date": "Sat, 31 Aug 2013 23:10:45 -0400", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar vs foreign key vs enumerator - table and index size" }, { "msg_contents": "\nOn 1 wrz 2013, at 03:31, Craig James <[email protected]> wrote:\n\n> If your applications are read-heavy and only have a small-ish amount of code that inserts/updates the table, it may not be that much of a rewrite. You can create a integer/varchar table of key/values, use its key to replace the current varchar column, rename the original table, and create a view with the original table's name. Code that only reads the data won't know the difference. And it's a portable solution.\n> \n> I did this and it worked out well.
If the key/value pairs table is relatively small, the planner does an excellent job of generating efficient queries against the big table.\n> \n> Craig\n\nActually this (event) table is write heavy. But the concept is really cool and worth trying. Thanks.\n\n\nLukasz\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 2 Sep 2013 11:47:37 +0200", "msg_from": "=?iso-8859-2?Q?=A3ukasz_Walkowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar vs foreign key vs enumerator - table and index size" }, { "msg_contents": "\nOn 1 wrz 2013, at 05:10, Vitalii Tymchyshyn <[email protected]> wrote:\n> \n> \n> Well, there are some more options:\n> a) Store int keys and do mapping in the application (e.g. with java enums). This can save you a join, that is especially useful if you are going to do paged output with limit/offset scenario. Optimizer sometimes produce suboptimal plans for join in offset/limit queries.\n> b) Store small varchar values as keys (up to \"char\" type if you really want to save space) and do user display mapping in application. It's different from (a) since it's harder to mess with the mapping and values are still more or less readable with simple select. But it can be less efficient than (a).\n> c) Do mixed approach with mapping table, loaded on start into application memory. This would be an optimization in case you get into optimizer troubles.\n> \n> Best regards, Vitalii Tymchyshyn\n\nI'd like to leave database in readable form because before I add some new queries and rest endpoints to the application, I test them as ad-hoc queries using command line. So variant a) isn't good for me. Variant b) is worth trying and c) is easy to code, but I still prefer having all this data in database independent of application logic.\n\nThanks for suggestion,\nLukasz\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 2 Sep 2013 11:53:26 +0200", "msg_from": "=?iso-8859-2?Q?=A3ukasz_Walkowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar vs foreign key vs enumerator - table and index size" }, { "msg_contents": "\nOn 09/02/2013 05:53 AM, Łukasz Walkowski wrote:\n> On 1 wrz 2013, at 05:10, Vitalii Tymchyshyn <[email protected]> wrote:\n>>\n>> Well, there are some more options:\n>> a) Store int keys and do mapping in the application (e.g. with java enums). This can save you a join, that is especially useful if you are going to do paged output with limit/offset scenario. Optimizer sometimes produce suboptimal plans for join in offset/limit queries.\n>> b) Store small varchar values as keys (up to \"char\" type if you really want to save space) and do user display mapping in application. It's different from (a) since it's harder to mess with the mapping and values are still more or less readable with simple select. But it can be less efficient than (a).\n>> c) Do mixed approach with mapping table, loaded on start into application memory. This would be an optimization in case you get into optimizer troubles.\n>>\n>> Best regards, Vitalii Tymchyshyn\n> I'd like to leave database in readable form because before I add some new queries and rest endpoints to the application, I test them as ad-hoc queries using command line. So variant a) isn't good for me. 
Variant b) is worth trying and c) is easy to code, but I still prefer having all this data in database independent of application logic.\n>\n\n\nI think the possible use of Postgres enums has been too easily written \noff in this thread. Looking at the original problem description they \nlook like quite a good fit, despite the OP's skepticism. What exactly is \nwanted that can't be done with database enums? You can add new values to \nthe type very simply. You can change the values of existing labels in \nthe type slightly less simply, but still without any great difficulty. \nThings that are hard to do include removing labels in the set and \nchanging the sort order, because those things would require processing \ntables where the type is used, unlike the simple things. But neither of \nthese is required for typical use cases. For most uses of this kind they \nare very efficient both in storage and processing.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 02 Sep 2013 13:02:56 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar vs foreign key vs enumerator - table and index\n size" }, { "msg_contents": "Well, in older version of Hibernate it was a little tricky to handle\nPostgresql Enums. Dunno if it's out of the box now.\nAlso adding new value is an explicit operation (much like with lookup\ntable). I've had quite a complex code with second connection opening to\nsupport lookup table filling without flooding original transaction with\nadditional locks that could lead to deadlocks.\nBTW: Does adding new value to enum adds some locks? Can a check if value\nexists and adding new value be done in atomic fashion without grabbing some\nglobal lock?\nP.S. As I see, it can be a topic for good article for, say, dzone. The\nproblem can be quite tricky in MVCC database and choice must be done wisely.\n\nBest regards, Vitalii Tymchyshyn\n\n\n2013/9/2 Andrew Dunstan <[email protected]>\n\n>\n> On 09/02/2013 05:53 AM, Łukasz Walkowski wrote:\n>\n>> On 1 wrz 2013, at 05:10, Vitalii Tymchyshyn <[email protected]> wrote:\n>>\n>>>\n>>> Well, there are some more options:\n>>> a) Store int keys and do mapping in the application (e.g. with java\n>>> enums). This can save you a join, that is especially useful if you are\n>>> going to do paged output with limit/offset scenario. Optimizer sometimes\n>>> produce suboptimal plans for join in offset/limit queries.\n>>> b) Store small varchar values as keys (up to \"char\" type if you really\n>>> want to save space) and do user display mapping in application. It's\n>>> different from (a) since it's harder to mess with the mapping and values\n>>> are still more or less readable with simple select. But it can be less\n>>> efficient than (a).\n>>> c) Do mixed approach with mapping table, loaded on start into\n>>> application memory. This would be an optimization in case you get into\n>>> optimizer troubles.\n>>>\n>>> Best regards, Vitalii Tymchyshyn\n>>>\n>> I'd like to leave database in readable form because before I add some new\n>> queries and rest endpoints to the application, I test them as ad-hoc\n>> queries using command line. So variant a) isn't good for me. 
Variant b) is\n>> worth trying and c) is easy to code, but I still prefer having all this\n>> data in database independent of application logic.\n>>\n>>\n>\n> I think the possible use of Postgres enums has been too easily written off\n> in this thread. Looking at the original problem description they look like\n> quite a good fit, despite the OP's skepticism. What exactly is wanted that\n> can't be done with database enums? You can add new values to the type very\n> simply. You can change the values of existing labels in the type slightly\n> less simply, but still without any great difficulty. Things that are hard\n> to do include removing labels in the set and changing the sort order,\n> because those things would require processing tables where the type is\n> used, unlike the simple things. But neither of these is required for\n> typical use cases. For most uses of this kind they are very efficient both\n> in storage and processing.\n>\n> cheers\n>\n> andrew\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn", "msg_date": "Tue, 3 Sep 2013 22:00:53 +0300", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar vs foreign key vs enumerator - table and index size" }, { "msg_contents": "On 8/31/13 8:35 AM, Łukasz Walkowski wrote:\n> 3. And this part is most interesting for me. Columns browser, eventsource, eventtype, devicetype, operatingsystem contain a small pool of strings - for example for devicetype this is set to Computer, Mobile, Tablet or Unknown. Browser is set to normalized browser name. In every case I can store those data using one of 3 different methods:\n\nSorry for the late reply... the Enova Tools project in pgFoundry has code that lets you create a \"dynamic lookup table\" that allows for easily normalizing slow-changing data. You could then put a writable view on top of that so that the app wouldn't know the difference. It also has a backfill framework that would help you move data from the old table to the new table.\n-- \nJim C. Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Sep 2013 10:07:55 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar vs foreign key vs enumerator - table and index\n size" } ]
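For readers weighing the two main options debated in this thread, a minimal sketch of each is below. The enum labels are the device types quoted in the original post; the extra label and the object names are hypothetical, and on the releases discussed here ALTER TYPE ... ADD VALUE cannot be run inside a transaction block:

    -- Enum variant:
    CREATE TYPE devicetype_enum AS ENUM ('Computer', 'Mobile', 'Tablet', 'Unknown');
    ALTER TYPE devicetype_enum ADD VALUE 'SmartTV';   -- hypothetical new label, appended later

    -- Lookup-table variant with an integer key, as suggested above:
    CREATE TABLE devicetypes (
        devicetype_id integer PRIMARY KEY,
        devicetype    varchar NOT NULL UNIQUE
    );

Either way the per-row storage for such a column drops to 4 bytes (both an integer key and an enum value are stored as 4 bytes, subject to the alignment caveats Tom mentions), which is where the space saving comes from.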
[ { "msg_contents": "Hi,\n\ndepending on the OFFSET parameter I have seen at least 3 different query\nplans.\n\nSELECT * FROM\n (\n SELECT * FROM transaction tt\n WHERE\n tt.account_id = '1376641'\n AND tt.transaction_time >= '2013-02-03 05:37:24'\n AND tt.transaction_time < '2013-08-23 05:37:24'\n ORDER BY\n tt.transaction_time ASC,\n tt.id ASC\n LIMIT 10000\n OFFSET 0\n ) t1\n LEFT OUTER JOIN fmb t2\n ON (t1.fmb_id = t2.id)\n LEFT OUTER JOIN payment.payment t3\n ON (t1.payment_id = t3.id);\n\nThe best of them is this:\n\n Nested Loop Left Join (cost=1488.34..126055.47 rows=9985 width=1015)\n (actual time=26.894..78.711 rows=10000 loops=1)\n -> Nested Loop Left Join\n (cost=1487.91..86675.47 rows=9985 width=828)\n (actual time=26.892..72.170 rows=10000 loops=1)\n -> Limit (cost=1487.35..1911.50 rows=9985 width=597)\n (actual time=26.873..33.735 rows=10000 loops=1)\n -> Index Scan using xxx on transaction tt\n (cost=0.57..1911.50 rows=44985 width=597)\n (actual time=0.020..31.707 rows=45000 loops=1)\n Index Cond: ((account_id = 1376641::bigint) AND\n (transaction_time >= '...') AND\n (transaction_time < '...'))\n -> Index Scan using pk_fmb on fmb t2\n (cost=0.56..8.47 rows=1 width=231)\n (actual time=0.003..0.003 rows=1 loops=10000)\n Index Cond: (tt.fmb_id = id)\n -> Index Scan using pk_payment on payment t3\n (cost=0.43..3.93 rows=1 width=187)\n (actual time=0.000..0.000 rows=0 loops=10000)\n Index Cond: (tt.payment_id = id)\n Total runtime: 79.219 ms\n\nAnother one is this:\n\n Hash Left Join (cost=55139.59..140453.16 rows=9985 width=1015)\n (actual time=715.450..762.989 rows=10000 loops=1)\n Hash Cond: (tt.payment_id = t3.id)\n -> Nested Loop Left Join\n (cost=1487.91..86675.47 rows=9985 width=828)\n (actual time=27.472..70.723 rows=10000 loops=1)\n -> Limit (cost=1487.35..1911.50 rows=9985 width=597)\n (actual time=27.453..34.066 rows=10000 loops=1)\n -> Index Scan using xxx on transaction tt\n (cost=0.57..1911.50 rows=44985 width=597)\n (actual time=0.076..32.050 rows=45000 loops=1)\n Index Cond: ((account_id = 1376641::bigint) AND\n (transaction_time >= '...') AND\n (transaction_time < '...'))\n -> Index Scan using pk_fmb on fmb t2\n (cost=0.56..8.47 rows=1 width=231)\n (actual time=0.003..0.003 rows=1 loops=10000)\n Index Cond: (tt.fmb_id = id)\n -> Hash (cost=40316.30..40316.30 rows=1066830 width=187)\n (actual time=687.651..687.651 rows=1066830 loops=1)\n Buckets: 131072 Batches: 1 Memory Usage: 235206kB\n -> Seq Scan on payment t3\n (cost=0.00..40316.30 rows=1066830 width=187)\n (actual time=0.004..147.681 rows=1066830 loops=1)\n Total runtime: 781.584 ms\n\n\nYou see this 2nd plan takes 10 times longer.\n\nNow, if I\n\n set enable_seqscan=off;\n\nthe planner generates the 1st plan also for this parameter set and it\nexecutes in about the same time (~80 ms).\n\nThen I created a new tablespace with very low cost settings:\n\n alter tablespace trick_indexes set\n (seq_page_cost=0.0001, random_page_cost=0.0001);\n\nand moved the pk_payment there. The tablespace is located on the same\ndisk. The only reason for it's existence are the differing cost parameters.\n\nNow I could turn enable_seqscan back on and still got the better query plan.\n\n\nIs there an other way to make the planner use generate the 1st plan?\n\nWhy does it generate the 2nd plan at all?\n\nDoes the planner take into account what is currently present in shared\nmemory? 
If so, it could know that the pk_payment index is probably in\nRAM most of the time.\n\n\nThanks,\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 02 Sep 2013 21:57:04 +0200", "msg_from": "=?ISO-8859-1?Q?Torsten_F=F6rtsch?= <[email protected]>", "msg_from_op": true, "msg_subject": "planner parameters" }, { "msg_contents": "Torsten Förtsch <[email protected]> wrote:\n\n> Is there an other way to make the planner use generate the 1st\n> plan?\n\nThe planner cost factors are based on the assumption that a\nmoderate percentage of random page reads will need to actually go\nout to disk.  If a high percentage of pages are in cache, you may\nwant to reduce random_page_cost to something closer to (or even\nequal to) seq_page_cost.  I generally find I get better plans if I\nraise cpu_tuple_cost to 0.03.  effective_cache_size should\ngenerally be between 50% and 75% of machine RAM.  If these changes\n(or others of their ilk) cause costs to be estimated in a way which\nmore nearly matches reality, better plans will be chosen.\n\n> Why does it generate the 2nd plan at all?\n\nIt has the lowest estimated cost, based on your memory\nconfiguration and cost factors.\n\n> Does the planner take into account what is currently present in\n> shared memory?\n\nNo.  If you search the archives you can probably find previous\ndiscussions of whether it would be a good idea to do so; the\nconsensus has been that it would not be.\n\nIf you have further performance-related questions, please review\nthis page so that you can provide enough information to allow\npeople to give the most helpful advice:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 3 Sep 2013 12:02:00 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner parameters" } ]
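The cost-factor changes Kevin suggests can be tried per session before touching postgresql.conf or resorting to the per-tablespace trick; this is only a sketch and the numbers are illustrative, not recommendations for this particular machine:

    SET random_page_cost = 1.1;         -- closer to seq_page_cost when most pages are cached
    SET cpu_tuple_cost = 0.03;
    SET effective_cache_size = '96GB';  -- illustrative; roughly 50-75% of machine RAM
    -- then re-run EXPLAIN (ANALYZE, BUFFERS) on the query above and compare plans

If the desired nested-loop plan shows up with settings along these lines, the same values can then be made permanent in postgresql.conf.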
[ { "msg_contents": "Hi.\n\nI have a strange situation where generating the query plan takes 6s+ and\nexecuting it takes very little time.\n\n2013-09-03 09:19:38.726 db=# explain select table.id from db.table left\njoin db.tablepro on db.id = tablepro.table_id where table.fts @@\nto_tsquery('english','q12345') ;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=43.71..12711.39 rows=2930 width=4)\n -> Bitmap Heap Scan on sequence (cost=43.71..4449.10 rows=2930 width=4)\n Recheck Cond: (fts @@ '''q12345'''::tsquery)\n -> Bitmap Index Scan on table_gin_idx (cost=0.00..42.98\nrows=2930 width=0)\n Index Cond: (fts @@ '''q12345'''::tsquery)\n -> Index Only Scan using tablepro_seqid_idx on tablepro \n(cost=0.00..2.81 rows=1 width=4)\n Index Cond: (tablepro_id = table.id)\n(7 rows)\n\nTime: 10458.404 ms\n\n\nThe query gives 4 rows out of 50.000.000, so the query-plan is actually\ncorrect and as expected.\n\nAny suggestions?\n\nJesper\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 3 Sep 2013 09:46:02 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Slow query-plan generation (fast query) PG 9.2" }, { "msg_contents": "On 09/03/2013 03:46 PM, [email protected] wrote:\n> Hi.\n> \n> I have a strange situation where generating the query plan takes 6s+ and\n> executing it takes very little time.\n\nHow do you determine that it's planning time at fault here?\n\nPlease take separate timing for:\n\nPREPARE testq AS select table.id from db.table left\njoin db.tablepro on db.id = tablepro.table_id where table.fts @@\nto_tsquery('english','q12345') ;\n\nand then:\n\nEXPLAIN ANALYZE EXECUTE testq;\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 03 Sep 2013 15:47:08 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query-plan generation (fast query) PG 9.2" }, { "msg_contents": "On 03/09/13 09:47, Craig Ringer wrote:\n> On 09/03/2013 03:46 PM, [email protected] wrote:\n>> Hi.\n>>\n>> I have a strange situation where generating the query plan takes 6s+ and\n>> executing it takes very little time.\n> How do you determine that it's planning time at fault here?\nNot that I'm sure, but the timing I send were only for \"explain\" not \n\"explain analyze\". The database is constantly updating and at the moment \ni cannot reproduce it any more. But at the time I picked the numbers it \nwere very reproducible.. (tried 10+ times over 15 minutes).\n\n Please take separate timing for: PREPARE testq AS select table.id\n from db.table left join db.tablepro on db.id = tablepro.table_id\n where table.fts @@ to_tsquery('english','q12345') ; and then:\n EXPLAIN ANALYZE EXECUTE testq; \n\nI'll try to do that if i see the problem re-occour. I'm just very \ninterested in what explain then does if it is not only the time for the \nquery plan. When I did try the \"PREPARE / EXECUTE\" dance as you \ndescribed .. 
i didnt see the prepare state take time, which seems to be \nconsistent with that the planning time is in the EXECUTE step according \nto the documentation.\n\n-- \nJesper", "msg_date": "Tue, 03 Sep 2013 21:34:10 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query-plan generation (fast query) PG 9.2" }, { "msg_contents": "On Tue, Sep 3, 2013 at 2:34 PM, Jesper Krogh <[email protected]> wrote:\n> On 03/09/13 09:47, Craig Ringer wrote:\n>\n> On 09/03/2013 03:46 PM, [email protected] wrote:\n>\n> Hi.\n>\n> I have a strange situation where generating the query plan takes 6s+ and\n> executing it takes very little time.\n>\n> How do you determine that it's planning time at fault here?\n>\n> Not that I'm sure, but the timing I send were only for \"explain\" not\n> \"explain analyze\". The database is constantly updating and at the moment i\n> cannot reproduce it any more. But at the time I picked the numbers it were\n> very reproducible.. (tried 10+ times over 15 minutes).\n\nMaybe your explain was blocking on locks?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Sep 2013 08:59:39 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query-plan generation (fast query) PG 9.2" } ]
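Merlin's lock hypothesis is easy to check the next time the slow EXPLAIN can be reproduced: while it is running, a second session can look for ungranted locks. A sketch (pg_stat_activity.query assumes 9.2, the version named in this thread):

    SELECT l.pid, l.locktype, l.relation::regclass AS relation, l.mode, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE NOT l.granted;

A row here for the backend running the EXPLAIN (for example, waiting behind an ALTER TABLE or another AccessExclusiveLock holder) would explain a plain EXPLAIN stalling for seconds even though planning itself is cheap.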
[ { "msg_contents": "Dear All\n\nI'm running Postgres 8.4 on Ubuntu 10.4 Linux server (64bit)\nI have a big table tath contains product information: during the day we perform a process that import new product continously with statemtn COPY TO .. from files to this table.\n\nAs result the table disk space is growing fast, it seems that postgres is not able to free space for old rows.\n\nIs it possible to run a specific autovacuum acivity or say to postgres \"every time I delete a row, delete it immedialty and don't take care of other transactions\" ?\n\nDo you have any suggestion for me?\n\nI'll appreciate every suggeestion you can provide me.\n\nMany thanks in advance\n\nRoberto\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 3 Sep 2013 18:16:13 +0200 (CEST)", "msg_from": "Roberto Grandi <[email protected]>", "msg_from_op": true, "msg_subject": "COPY TO and VACUUM" }, { "msg_contents": "Roberto Grandi <[email protected]> wrote:\n\n> I'm running Postgres 8.4 on Ubuntu 10.4 Linux server (64bit)\n> I have a big table tath contains product information: during the\n> day we perform a process that import new product continously with\n> statemtn COPY TO .. from files to this table.\n>\n> As result the table disk space is growing fast, it seems that\n> postgres is not able to free space for old rows.\n\nCOPY TO would not free any space.  Is there some other activity you\nhaven't yet mentioned?\n\n> Is it possible to run a specific autovacuum acivity or say to\n> postgres \"every time I delete a row, delete it immedialty and\n> don't take care of other transactions\" ?\n\nYou can configure autovacuum to be more aggressive, or you could\nrun VACUUM statements.\n\n> Do you have any suggestion for me?\n\n8.4 is getting pretty old; there have been a lot of autovacuum\nimprovements in recent years.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 3 Sep 2013 13:34:30 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY TO and VACUUM" }, { "msg_contents": "Hi kevin\n\nfirst of all thanks for your help. I did a mistake we are using postgres 8.3.\n\nI didn't expect COPY TO frees space but I was wondering Autovacumm delete dead rows as soon as possible, in fact my scenario is:\n\n- Delete all products record for a vendor\n- Reload all products record (from new listing) for the same vendor.\n\nObviously we repeat this process continously and table space is growing really fast.\n\nCan you suggest me an approach for autovacuum within this scenario and, if you want, suggest me an appropriate version of postgres that help solving my problem?\n\nMany thanks in advance again.\n\nBR,\nRoberto\n\n\n\n----- Messaggio originale -----\nDa: \"Kevin Grittner\" <[email protected]>\nA: \"Roberto Grandi\" <[email protected]>, [email protected]\nInviato: Martedì, 3 settembre 2013 22:34:30\nOggetto: Re: [PERFORM] COPY TO and VACUUM\n\nRoberto Grandi <[email protected]> wrote:\n\n> I'm running Postgres 8.4 on Ubuntu 10.4 Linux server (64bit)\n> I have a big table tath contains product information: during the\n> day we perform a process that import new product continously with\n> statemtn COPY TO .. 
from files to this table.\n>\n> As result the table disk space is growing fast, it seems that\n> postgres is not able to free space for old rows.\n\nCOPY TO would not free any space.  Is there some other activity you\nhaven't yet mentioned?\n\n> Is it possible to run a specific autovacuum acivity or say to\n> postgres \"every time I delete a row, delete it immedialty and\n> don't take care of other transactions\" ?\n\nYou can configure autovacuum to be more aggressive, or you could\nrun VACUUM statements.\n\n> Do you have any suggestion for me?\n\n8.4 is getting pretty old; there have been a lot of autovacuum\nimprovements in recent years.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Sep 2013 08:15:08 +0200 (CEST)", "msg_from": "Roberto Grandi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY TO and VACUUM" }, { "msg_contents": "Roberto Grandi <[email protected]> wrote:\n\n> we are using postgres 8.3.\n\n> my scenario is:\n>\n> - Delete all products record for a vendor\n> - Reload all products record (from new listing) for the same\n>   vendor.\n>\n> Obviously we repeat this process continously and table space is\n> growing really fast.\n>\n> Can you suggest me an approach for autovacuum within this\n> scenario and, if you want, suggest me an appropriate version of\n> postgres that help solving my problem?\n\nAt this point I would recommend the latest minor release of 9.2 for\nproduction use.  If you were early in a development cycle I would\nsuggest considering the soon-to-be-released 9.3.0.  Be sure to stay\ncurrent on minor releases.\n\nhttp://www.postgresql.org/support/versioning/\n\nIf your table space is growing fast with this usage pattern, it\nsuggests that autovacuum is not configured to be aggressive enough.\nMy suggestions:\n\nMake sure autovacuum is on.\n\nDecrease autovacuum_naptime to 15s, so that it will notice deletes\nsooner.\n\nYou could consider reducing autovacuum_scale_factor below the\ndefault of 0.2 so that it triggers based on fewer deletes.\n\nYou should probably set autovacuum_vacuum_cost_limit to 400 and\nincrementally increase it until autovacuum is able to keep up with\nthe activity you describe.  It defaults to 200 and I have had to\nset it to 650 on some systems to allow it to keep up.  It wouldn't\nbe surprising if some systems need a higher setting.  Higher\nsettings may cause autovacuum activity to have a more noticeable\nimpact on foreground processes; but if it is too low, you will\ndevelop bloat which will harm performance and eat disk space.\n\nIf all autovacuum workers are sometimes busy with big tables for\nextended periods and you see other tables neglected for too long,\nyou should boost autovacuum_max_workers until that problem is\nsolved.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Sep 2013 01:10:41 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY TO and VACUUM" }, { "msg_contents": "On Tue, Sep 3, 2013 at 11:15 PM, Roberto Grandi\n<[email protected]> wrote:\n> Hi kevin\n>\n> first of all thanks for your help. 
I did a mistake we are using postgres 8.3.\n>\n> I didn't expect COPY TO frees space but I was wondering Autovacumm delete dead rows as soon as possible, in fact my scenario is:\n>\n> - Delete all products record for a vendor\n> - Reload all products record (from new listing) for the same vendor.\n>\n> Obviously we repeat this process continously and table space is growing really fast.\n\nIt isn't obvious to me why you would do this continuously. Surely\nyour vendors don't change their catalogs exactly as fast as your\ndatabase can digest them!\n\nIn any event, I'd probably just incorporate a manual vacuum statement\ninto the delete/reload cycle. Since delete and copy are not\nthrottled, while autovacuum is throttled by default to a rather low\nlevel, it is quite possible that default autovacuum can never keep up\nwith the workload you are generating. Rather than trying to tune\nautovacuum to fit this special case, it would be easier to just throw\nin some manual vacuuming. (Not instead of autovac, just as a\nsupplement to it)\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Sep 2013 09:29:13 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY TO and VACUUM" }, { "msg_contents": "Hi Jeff,\n\nthe proble is that when continously updloading vendors listing on our \"big\" table the autovacuum is not able to free space as we would.\nSecondarly, if we launch a Vacuum after each \"upload\" we collide with other upload taht are running in parallel.\n\nIs it possible, form your point of view, working with isolation levels or table partitioning to minimize table space growing?\nThanks again for all your help.\n\nBR,\nRoberto\n\n----- Messaggio originale -----\nDa: \"Jeff Janes\" <[email protected]>\nA: \"Roberto Grandi\" <[email protected]>\nCc: \"Kevin Grittner\" <[email protected]>, [email protected]\nInviato: Mercoledì, 4 settembre 2013 18:29:13\nOggetto: Re: [PERFORM] COPY TO and VACUUM\n\nOn Tue, Sep 3, 2013 at 11:15 PM, Roberto Grandi\n<[email protected]> wrote:\n> Hi kevin\n>\n> first of all thanks for your help. I did a mistake we are using postgres 8.3.\n>\n> I didn't expect COPY TO frees space but I was wondering Autovacumm delete dead rows as soon as possible, in fact my scenario is:\n>\n> - Delete all products record for a vendor\n> - Reload all products record (from new listing) for the same vendor.\n>\n> Obviously we repeat this process continously and table space is growing really fast.\n\nIt isn't obvious to me why you would do this continuously. Surely\nyour vendors don't change their catalogs exactly as fast as your\ndatabase can digest them!\n\nIn any event, I'd probably just incorporate a manual vacuum statement\ninto the delete/reload cycle. Since delete and copy are not\nthrottled, while autovacuum is throttled by default to a rather low\nlevel, it is quite possible that default autovacuum can never keep up\nwith the workload you are generating. Rather than trying to tune\nautovacuum to fit this special case, it would be easier to just throw\nin some manual vacuuming. 
(Not instead of autovac, just as a\nsupplement to it)\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Sep 2013 18:05:08 +0200 (CEST)", "msg_from": "Roberto Grandi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY TO and VACUUM" }, { "msg_contents": "Hi Roberto,\n\nYes you could partition by vendor and then truncate the partition before loading.\n\nTruncate reclaims space immediately and is generally much faster than delete.\n\n\nOn Thu, Sep 05, 2013 at 06:05:08PM +0200, Roberto Grandi wrote:\n- Hi Jeff,\n- \n- the proble is that when continously updloading vendors listing on our \"big\" table the autovacuum is not able to free space as we would.\n- Secondarly, if we launch a Vacuum after each \"upload\" we collide with other upload taht are running in parallel.\n- \n- Is it possible, form your point of view, working with isolation levels or table partitioning to minimize table space growing?\n- Thanks again for all your help.\n- \n- BR,\n- Roberto\n- \n- ----- Messaggio originale -----\n- Da: \"Jeff Janes\" <[email protected]>\n- A: \"Roberto Grandi\" <[email protected]>\n- Cc: \"Kevin Grittner\" <[email protected]>, [email protected]\n- Inviato: Mercoledì, 4 settembre 2013 18:29:13\n- Oggetto: Re: [PERFORM] COPY TO and VACUUM\n- \n- On Tue, Sep 3, 2013 at 11:15 PM, Roberto Grandi\n- <[email protected]> wrote:\n- > Hi kevin\n- >\n- > first of all thanks for your help. I did a mistake we are using postgres 8.3.\n- >\n- > I didn't expect COPY TO frees space but I was wondering Autovacumm delete dead rows as soon as possible, in fact my scenario is:\n- >\n- > - Delete all products record for a vendor\n- > - Reload all products record (from new listing) for the same vendor.\n- >\n- > Obviously we repeat this process continously and table space is growing really fast.\n- \n- It isn't obvious to me why you would do this continuously. Surely\n- your vendors don't change their catalogs exactly as fast as your\n- database can digest them!\n- \n- In any event, I'd probably just incorporate a manual vacuum statement\n- into the delete/reload cycle. Since delete and copy are not\n- throttled, while autovacuum is throttled by default to a rather low\n- level, it is quite possible that default autovacuum can never keep up\n- with the workload you are generating. Rather than trying to tune\n- autovacuum to fit this special case, it would be easier to just throw\n- in some manual vacuuming. 
(Not instead of autovac, just as a\n- supplement to it)\n- \n- Cheers,\n- \n- Jeff\n- \n- \n- -- \n- Sent via pgsql-performance mailing list ([email protected])\n- To make changes to your subscription:\n- http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Sep 2013 09:24:17 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY TO and VACUUM" }, { "msg_contents": "On Thu, Sep 5, 2013 at 9:05 AM, Roberto Grandi\n<[email protected]> wrote:\n> Hi Jeff,\n>\n> the proble is that when continously updloading vendors listing on our \"big\" table the autovacuum is not able to free space as we would.\n\nIt might not be able to free it (to be reused) as fast as you need it\nto, but it should be freeing it eventually.\n\n> Secondarly, if we launch a Vacuum after each \"upload\" we collide with other upload taht are running in parallel.\n\nI wouldn't do a manual vacuum after *each* upload. Doing one after\nevery Nth upload, where N is estimated to make up about 1/5 of the\ntable, should be good. You are probably IO limited, so you probably\ndon't gain much by running these uploads in parallel, I would try to\navoid that. But in any case, there shouldn't be a collision between\nmanual vacuum and a concurrent upload. There would be one between two\nmanual vacuums but you could code around that by explicitly locking\nthe table in the correct mode nowait or with a timeout, and skipping\nthe vacuum if it can't get the lock.\n\n>\n> Is it possible, form your point of view, working with isolation levels or table partitioning to minimize table space growing?\n\nPartitioning by vendor might work well for that purpose.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Sep 2013 11:14:26 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY TO and VACUUM" }, { "msg_contents": "Hi Guys,\n\nwe found a suitable solution for our process we run every 5-6 hours a CLUSTER stement for our big table: this \"lock\" activities but allow us to recover all available space.\n\nWhen testing this task we discover another issues and that's why I'm coming back to you for your experience:\n\nduting our process we run multiple simoultaneously \"COPY... FROM\" in order to load data into our table but a t the same time we run also \"COPY ... TO\" statement in parallel to export data for other clients.\n\nWe found that COPY .. TO queries sometimes are pending for more than 100 minutes and the destination file continues to be at 0 Kb. 
Can you advise me how to solve this issue?\nIs it here a best way to bulk download data avoiding any kind of block when running in parallel?\n\nMany thanks in advance\n\n\n----- Messaggio originale -----\nDa: \"Jeff Janes\" <[email protected]>\nA: \"Roberto Grandi\" <[email protected]>\nCc: \"Kevin Grittner\" <[email protected]>, [email protected]\nInviato: Giovedì, 5 settembre 2013 20:14:26\nOggetto: Re: [PERFORM] COPY TO and VACUUM\n\nOn Thu, Sep 5, 2013 at 9:05 AM, Roberto Grandi\n<[email protected]> wrote:\n> Hi Jeff,\n>\n> the proble is that when continously updloading vendors listing on our \"big\" table the autovacuum is not able to free space as we would.\n\nIt might not be able to free it (to be reused) as fast as you need it\nto, but it should be freeing it eventually.\n\n> Secondarly, if we launch a Vacuum after each \"upload\" we collide with other upload taht are running in parallel.\n\nI wouldn't do a manual vacuum after *each* upload. Doing one after\nevery Nth upload, where N is estimated to make up about 1/5 of the\ntable, should be good. You are probably IO limited, so you probably\ndon't gain much by running these uploads in parallel, I would try to\navoid that. But in any case, there shouldn't be a collision between\nmanual vacuum and a concurrent upload. There would be one between two\nmanual vacuums but you could code around that by explicitly locking\nthe table in the correct mode nowait or with a timeout, and skipping\nthe vacuum if it can't get the lock.\n\n>\n> Is it possible, form your point of view, working with isolation levels or table partitioning to minimize table space growing?\n\nPartitioning by vendor might work well for that purpose.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Sep 2013 08:14:11 +0200 (CEST)", "msg_from": "Roberto Grandi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY TO and VACUUM" }, { "msg_contents": "On Wed, Sep 11, 2013 at 11:14 PM, Roberto Grandi <\[email protected]> wrote:\n\n> Hi Guys,\n>\n> we found a suitable solution for our process we run every 5-6 hours a\n> CLUSTER stement for our big table: this \"lock\" activities but allow us to\n> recover all available space.\n>\n\n\nIf you can tolerate the locks, that is fine, but it just doesn't seem like\nthis should be necessary. A manual vacuum should get the job done with\nweaker locking. Did you try running a manual vacuum every 5-6 hours\ninstead (it would probably not reclaim the space, but would make it\navailable for reuse and so cap the steady-state size of the file, hopefully\nto about the same size as the max size under the CLUSTER regime)\n\n\n> When testing this task we discover another issues and that's why I'm\n> coming back to you for your experience:\n>\n> duting our process we run multiple simoultaneously \"COPY... FROM\" in order\n> to load data into our table but a t the same time we run also \"COPY ... TO\"\n> statement in parallel to export data for other clients.\n>\n> We found that COPY .. TO queries sometimes are pending for more than 100\n> minutes and the destination file continues to be at 0 Kb. Can you advise me\n> how to solve this issue?\n>\n\nAre your COPY ... FROM also blocking, just in a way you are not detecting\n(because there is no growing file to watch the size of)? 
What does pg_lock\nsay?\n\nCheers,\n\nJeff\n\nOn Wed, Sep 11, 2013 at 11:14 PM, Roberto Grandi <[email protected]> wrote:\nHi Guys,\n\nwe found a suitable solution for our process we run every 5-6 hours a CLUSTER stement for our big table: this \"lock\" activities but allow us to recover all available space.\nIf you can tolerate the locks, that is fine, but it just doesn't seem like this should be necessary.  A manual vacuum should get the job done with weaker locking.  Did you try running a manual vacuum every 5-6 hours instead (it would probably not reclaim the space, but would make it available for reuse and so cap the steady-state size of the file, hopefully to about the same size as the max size under the CLUSTER regime)\n\nWhen testing this task we discover another issues and that's why I'm coming back to you for your experience:\n\nduting our process we run multiple simoultaneously \"COPY... FROM\" in order to load data into our table but a t the same time we run also \"COPY ... TO\" statement in parallel to export data for other clients.\n\nWe found that COPY .. TO queries sometimes are pending for more than 100 minutes and the destination file continues to be at 0 Kb. Can you advise me how to solve this issue?Are your COPY ... FROM also blocking, just in a way you are not detecting (because there is no growing file to watch the size of)?  What does pg_lock say?\nCheers,Jeff", "msg_date": "Sun, 15 Sep 2013 17:18:44 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY TO and VACUUM" }, { "msg_contents": "\nHi Jeff,\n\nthanks for your suggestion, Well test Vacuum instead of Cluster and come back with live result.\nat the same time i discovered that our COPY (...) TO are really really slow, I see 0Kb a t the beginning but at the end they grow by 4Kb each second.\nOur export is standard (i.e.: SELECT a, b, c FROM table1) but sometime it's very slow, what could be your suggestion? Is it possible to detect if we are facing problem on IO or Linux systemItself?\n\nMany thank in advance for all your help.\nRegards,\n\nRoberto\n\n\n----- Messaggio originale -----\nDa: \"Jeff Janes\" <[email protected]>\nA: \"Roberto Grandi\" <[email protected]>\nCc: [email protected], \"Kevin Grittner\" <[email protected]>\nInviato: Lunedì, 16 settembre 2013 2:18:44\nOggetto: Re: [PERFORM] COPY TO and VACUUM\n\nOn Wed, Sep 11, 2013 at 11:14 PM, Roberto Grandi <\[email protected]> wrote:\n\n> Hi Guys,\n>\n> we found a suitable solution for our process we run every 5-6 hours a\n> CLUSTER stement for our big table: this \"lock\" activities but allow us to\n> recover all available space.\n>\n\n\nIf you can tolerate the locks, that is fine, but it just doesn't seem like\nthis should be necessary. A manual vacuum should get the job done with\nweaker locking. Did you try running a manual vacuum every 5-6 hours\ninstead (it would probably not reclaim the space, but would make it\navailable for reuse and so cap the steady-state size of the file, hopefully\nto about the same size as the max size under the CLUSTER regime)\n\n\n> When testing this task we discover another issues and that's why I'm\n> coming back to you for your experience:\n>\n> duting our process we run multiple simoultaneously \"COPY... FROM\" in order\n> to load data into our table but a t the same time we run also \"COPY ... TO\"\n> statement in parallel to export data for other clients.\n>\n> We found that COPY .. 
TO queries sometimes are pending for more than 100\n> minutes and the destination file continues to be at 0 Kb. Can you advise me\n> how to solve this issue?\n>\n\nAre your COPY ... FROM also blocking, just in a way you are not detecting\n(because there is no growing file to watch the size of)? What does pg_lock\nsay?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 16 Sep 2013 17:14:09 +0200 (CEST)", "msg_from": "Roberto Grandi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY TO and VACUUM" } ]
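The per-vendor partitioning idea raised near the end of the thread, sketched with inheritance partitioning (the only kind available on the 8.3/8.4 releases being discussed). Table, column and file names here are illustrative, not the poster's real schema. TRUNCATE hands the space straight back instead of leaving dead rows for autovacuum to chase, and each reload only locks that vendor's child table, so parallel uploads for different vendors stay out of each other's way.

    -- parent table plus one child table per vendor
    CREATE TABLE products (
        vendor_id integer NOT NULL,
        sku       text    NOT NULL,
        price     numeric
    );

    CREATE TABLE products_vendor_42 (
        CHECK (vendor_id = 42)
    ) INHERITS (products);

    -- reload cycle for one vendor: TRUNCATE frees the space immediately,
    -- so no VACUUM is needed to reclaim it
    BEGIN;
    TRUNCATE products_vendor_42;
    COPY products_vendor_42 FROM '/path/to/vendor_42_listing.csv' WITH CSV;
    COMMIT;

    -- with constraint_exclusion enabled, a query that filters on vendor_id
    -- only touches the matching child table
    SELECT count(*) FROM products WHERE vendor_id = 42;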
[ { "msg_contents": "This is postgres 9.1.9.\n\nI'm getting a very weird case in which a simple range query over a PK\npicks the wrong... the very very wrong index.\n\nThe interesting thing, is that I've got no idea why PG is so grossly\nmis-estimating the number of rows scanned by the wrong plan.\n\nI've got this table that's a bunch of counters grouped by a bunch of\nother columns, and a date.\n\nThe PK is a simple serial type, and the unique index you'll see is\nover (created, expr1, expr2, ... expr2) where created is the date, and\nexprN are expressions involving the grouping columns.\n\nSo, I've got this query with this very wrong plan:\n\nexplain SELECT min(created) < ((date_trunc('day',now()) - '90\ndays'::interval)) FROM \"aggregated_tracks_daily_full\" WHERE id BETWEEN\n34979048 AND 35179048\n;\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=795.24..795.26 rows=1 width=0)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..795.24 rows=1 width=8)\n -> Index Scan using ix_aggregated_tracks_daily_full_unq on\naggregated_tracks_daily_full (cost=0.00..57875465.87 rows=72777\nwidth=8)\n Index Cond: (created IS NOT NULL)\n Filter: ((id >= 34979048) AND (id <= 35179048))\n(6 rows)\n\n\nThat plan will scan the entire table, because there is NO row with\ncreated null. I've got no idea why PG is choosing to scan over the\nunique index, given that 1) there's 0 rows outside the index\ncondition, so it'll scan the entire table, and 2) i've analyzed and\nvacuum analyzed repeatedly, no chage, and 3) there's the PK index that\nworks flawless.\n\nThe table is very big. So scanning it entriely in random fashion isn't smart.\n\nI can force the right plan like this:\n\nmat=# explain SELECT min(created) < ((date_trunc('day',now()) - '90\ndays'::interval)) FROM (select id,created FROM\n\"aggregated_tracks_daily_full\" WHERE id BETWEEN 34979048 AND 35179048\nORDER BY id) t;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=89416.49..89416.51 rows=1 width=8)\n -> Index Scan using aggregated_tracks_daily_full_pkey on\naggregated_tracks_daily_full (cost=0.00..88506.78 rows=72777\nwidth=16)\n Index Cond: ((id >= 34979048) AND (id <= 35179048))\n(3 rows)\n\n\nBut i'm wondering why I have to. 
There's something hinky there.\n\nPS: The point of the query is to know whether there's something to be\n\"archived\" (ie, too old) in that id range.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 3 Sep 2013 18:47:11 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Weird case of wrong index choice" }, { "msg_contents": "Claudio Freire <[email protected]> writes:\n> So, I've got this query with this very wrong plan:\n\n> explain SELECT min(created) < ((date_trunc('day',now()) - '90\n> days'::interval)) FROM \"aggregated_tracks_daily_full\" WHERE id BETWEEN\n> 34979048 AND 35179048\n> ;\n\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=795.24..795.26 rows=1 width=0)\n> InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..795.24 rows=1 width=8)\n> -> Index Scan using ix_aggregated_tracks_daily_full_unq on\n> aggregated_tracks_daily_full (cost=0.00..57875465.87 rows=72777\n> width=8)\n> Index Cond: (created IS NOT NULL)\n> Filter: ((id >= 34979048) AND (id <= 35179048))\n> (6 rows)\n\n> That plan will scan the entire table, because there is NO row with\n> created null.\n\nNo, it won't, because of the LIMIT. What it will do is scan until it\nfinds a row satisfying the \"filter\" condition. It's possible that such\nrows only exist towards the high end of the \"created\" range, but the\nplanner is supposing that they're reasonably uniformly distributed.\n\n> I've got no idea why PG is choosing to scan over the\n> unique index,\n\nIt's trying to optimize the MIN(). 
The other plan you show will require\nscanning some thousands of rows, and so is certain to take a lot of time.\nThis plan is better except in pathological cases, which unfortunately\nyou seem to have one of.\n\nIf you need this type of query to be reliably fast, you might consider\ncreating an index on (created, id), which would allow the correct answer\nto be found with basically a single index probe.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 03 Sep 2013 19:11:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird case of wrong index choice" }, { "msg_contents": "On Tue, Sep 3, 2013 at 8:11 PM, Tom Lane <[email protected]> wrote:\n> Claudio Freire <[email protected]> writes:\n>> So, I've got this query with this very wrong plan:\n>\n>> explain SELECT min(created) < ((date_trunc('day',now()) - '90\n>> days'::interval)) FROM \"aggregated_tracks_daily_full\" WHERE id BETWEEN\n>> 34979048 AND 35179048\n>> ;\n>\n>> QUERY PLAN\n>> -------------------------------------------------------------------------------------------------------------------------------------------------\n>> Result (cost=795.24..795.26 rows=1 width=0)\n>> InitPlan 1 (returns $0)\n>> -> Limit (cost=0.00..795.24 rows=1 width=8)\n>> -> Index Scan using ix_aggregated_tracks_daily_full_unq on\n>> aggregated_tracks_daily_full (cost=0.00..57875465.87 rows=72777\n>> width=8)\n>> Index Cond: (created IS NOT NULL)\n>> Filter: ((id >= 34979048) AND (id <= 35179048))\n>> (6 rows)\n>\n>> That plan will scan the entire table, because there is NO row with\n>> created null.\n>\n> No, it won't, because of the LIMIT. What it will do is scan until it\n> finds a row satisfying the \"filter\" condition. It's possible that such\n> rows only exist towards the high end of the \"created\" range, but the\n> planner is supposing that they're reasonably uniformly distributed.\n\nWell, as usual with serials and timestamps, there's very strong\ncorrelation between created and id, so yes, they're all on the high\nend. But they're not 100% correlated either, there's a lot of mixing\nthere because there's a few dozen processes dumping info there.\n\nThough I'm starting to see the pathological part of this situation:\ncreated has no lower bound on the query, but the correlation and the\nfact that I picked the id range to exclude already-archived rows (old,\nearlier created dates), means that it'll waste a lot of time skipping\nthem.\n\n>> I've got no idea why PG is choosing to scan over the\n>> unique index,\n>\n> It's trying to optimize the MIN(). The other plan you show will require\n> scanning some thousands of rows, and so is certain to take a lot of time.\n> This plan is better except in pathological cases, which unfortunately\n> you seem to have one of.\n\nActually, a lot here is under a second, and I'll be forced to scan all\nthose rows afterwards anyway to \"archive\" them, so I'm not worried\nabout that. 
It's the figuring out whether it's necessary or not\nwithout scanning already-archived rows.\n\n> If you need this type of query to be reliably fast, you might consider\n> creating an index on (created, id), which would allow the correct answer\n> to be found with basically a single index probe.\n\nI'll try, though I was trying to make do with the indices I already\nhave rather than add more, the PK should do (because it's a huge table\nwith tons of updates, so I gotta keep it lean).\n\nI don't see how such an index would help either... since the ids are\nsecond level and can't be range-queried... unless it's because then\nanalyze will see the correlation?\n\nPerhaps I should just add an \"archived\" bit and a partial index on\ncreated where not archived. I was just puzzled by the planner's bad\nchoice, but I see it's the correlation hurting here, and that's a hard\nproblem lots of hackers are attacking anyway.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 3 Sep 2013 20:49:31 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird case of wrong index choice" } ]
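The two indexing ideas from this thread written out as DDL against the poster's table. The first is Tom's (created, id) index, which he says allows the answer to be found with basically a single index probe; the second is the partial-index idea Claudio floats at the end, which assumes an archived flag is added and kept up to date by the archiving job (on a table this size that extra column is not free, so it is a trade-off rather than a recommendation). CONCURRENTLY keeps the heavy update traffic running while the index builds.

    -- Tom's suggestion
    CREATE INDEX CONCURRENTLY aggregated_tracks_daily_full_created_id
        ON aggregated_tracks_daily_full (created, id);

    -- the alternative sketched at the end of the thread: index only the rows
    -- that still need archiving (assumes a boolean column named archived)
    CREATE INDEX CONCURRENTLY aggregated_tracks_daily_full_unarchived
        ON aggregated_tracks_daily_full (created)
        WHERE NOT archived;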
[ { "msg_contents": "-- \nMilos Babic\nhttp://www.linkedin.com/in/milosbabic\nTwitter: @milosbabic\nSkype: milos.babic\n\n-- Milos Babichttp://www.linkedin.com/in/milosbabicTwitter: @milosbabicSkype: milos.babic", "msg_date": "Wed, 4 Sep 2013 00:16:50 +0200", "msg_from": "Milos Babic <[email protected]>", "msg_from_op": true, "msg_subject": "apply" } ]
[ { "msg_contents": "I am tasked with getting specs for a postgres database server for the \ncore purpose of running moodle at our university.\nThe main question is at the moment is 12core AMD or 6/8core (E Series) \nINTEL.\n\nWhat would be the most in portend metric in planning an enterprise level \nserver for moodle.\n\n-- \nJohan Loubser\n(021) 8084036\nInformasie Tegnologie\nUniversity of Stellenbosch\n\nE-pos vrywaringsklousule\n\nHierdie e-pos mag vertroulike inligting bevat en mag regtens geprivilegeerd wees en is slegs bedoel vir die persoon aan wie dit geadresseer is. Indien u nie die bedoelde ontvanger is nie, word u hiermee in kennis gestel dat u hierdie dokument geensins mag gebruik, versprei of kopieer nie. Stel ook asseblief die sender onmiddellik per telefoon in kennis en vee die e-pos uit. Die Universiteit aanvaar nie aanspreeklikheid vir enige skade, verlies of uitgawe wat voortspruit uit hierdie e-pos en/of die oopmaak van enige lês aangeheg by hierdie e-pos nie.\n\nE-mail disclaimer\n\nThis e-mail may contain confidential information and may be legally privileged and is intended only for the person to whom it is addressed. If you are not the intended recipient, you are notified that you may not use, distribute or copy this document in any manner whatsoever. Kindly also notify the sender immediately by telephone, and delete the e-mail. The University does not accept liability for any damage, loss or expense arising from this e-mail and/or accessing any files attached to this e-mail.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 04 Sep 2013 11:01:07 +0200", "msg_from": "Johan Loubser <[email protected]>", "msg_from_op": true, "msg_subject": "AMD vs Intel" }, { "msg_contents": "Johan Loubser wrote:\r\n> I am tasked with getting specs for a postgres database server for the\r\n> core purpose of running moodle at our university.\r\n> The main question is at the moment is 12core AMD or 6/8core (E Series)\r\n> INTEL.\r\n> \r\n> What would be the most in portend metric in planning an enterprise level\r\n> server for moodle.\r\n\r\nI know too little about hardware to give an answer, but there are a few\r\nthings to consider:\r\n\r\n- The faster each individual core is, the faster an individual\r\n query will be processed (in-memory sorting, hash tables etc.).\r\n\r\n- More cores will help concurrency, but maybe not a lot: often it is\r\n I/O bandwidth that is the limiting factor for concurrency.\r\n\r\nI think the best thing would be to estimate the amount of data,\r\nconcurrent users and operations per second you expect.\r\n\r\nThere is the excellent book \"PostgreSQL High Performance\" by Greg Smith\r\nthat I would recommend to read.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Sep 2013 10:41:00 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD vs Intel" }, { "msg_contents": "On 9/4/2013 3:01 AM, Johan Loubser wrote:\n> I am tasked with getting specs for a postgres database server for the \n> core purpose of running moodle at our university.\n> The main question is at the moment is 12core AMD or 6/8core (E Series) \n> INTEL.\n>\n> What would be the most in portend metric in planning an enterprise \n> level server for 
moodle.\n>\n\nThe only way to be sure is to test your workload on the two different \nmachines.\n\nThat said, we moved from AMD to Intel E series Xeon a year ago. We were \nusing 8-core AMD devices not 12, and switched to 6-core Intel. The Intel \ndevices were more attractive due to their (much) higher single-core \nperformance. Another factor was a history of hardware bugs in the \nsupport chips used on AMD systems that we haven't generally seen with \nIntel. So all other things being equal, Intel brings less hassle, in my \nexperience.\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 04 Sep 2013 07:14:53 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD vs Intel" } ]
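The advice above boils down to benchmarking the actual workload on both machines. As a very rough first pass, a single-backend, CPU-only query timed on each candidate box gives a feel for per-core speed, which is what matters most for the latency of an individual moodle query; it deliberately says nothing about I/O or about scaling across 12 versus 6/8 cores, for which pgbench runs against a copy of the real database are the more realistic test. This is a sketch, not a standard benchmark.

    \timing on

    -- pure CPU on one core: aggregate over a generated series
    EXPLAIN ANALYZE
    SELECT sum(g) FROM generate_series(1, 10000000) AS g;

    -- a sort on one core; work_mem is raised for the session so the sort
    -- stays in memory
    SET work_mem = '256MB';
    SELECT count(*)
    FROM (SELECT random() AS r
          FROM generate_series(1, 1000000)
          ORDER BY r) AS s;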
[ { "msg_contents": "Hello, in order to improve the performance of my database included in \nthe solution SAN SSD disk on RAID 10, but I see that the performance of \nthe database is the same. whichpostgresql.conf parameters that I do \nrecommend to change for use writing more.\n\nThank you very much.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 04 Sep 2013 10:22:46 -0500", "msg_from": "Jeison Bedoya Delgado <[email protected]>", "msg_from_op": true, "msg_subject": "higth performance write to disk" }, { "msg_contents": "On Wed, Sep 4, 2013 at 10:22 AM, Jeison Bedoya Delgado\n<[email protected]> wrote:\n> Hello, in order to improve the performance of my database included in the\n> solution SAN SSD disk on RAID 10, but I see that the performance of the\n> database is the same. whichpostgresql.conf parameters that I do recommend to\n> change for use writing more.\n\nGoing to need some more detailed information. In particular,\ntransaction rates, hardware specifics, database version, definition of\n'slow', etc etc etc.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Sep 2013 10:27:54 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: higth performance write to disk" }, { "msg_contents": "Hi merlin,\nThanks for your interest, I'm using version 9.2.2, I have a machine with \n128GB RAM, 32 cores, and my BD weighs 400GB. When I say slow I meant \nthat while consultations take or failing a copy with pgdump, this taking \nthe same time before had 10K disks in raid 10 and now that I have SSDs \nin Raid 10.\n\nThat behavior is normal, or you can improve writing and reading.\n\nthank you very much\nEl 04/09/2013 10:27 a.m., Merlin Moncure escribi�:\n> On Wed, Sep 4, 2013 at 10:22 AM, Jeison Bedoya Delgado\n> <[email protected]> wrote:\n>> Hello, in order to improve the performance of my database included in the\n>> solution SAN SSD disk on RAID 10, but I see that the performance of the\n>> database is the same. whichpostgresql.conf parameters that I do recommend to\n>> change for use writing more.\n> Going to need some more detailed information. In particular,\n> transaction rates, hardware specifics, database version, definition of\n> 'slow', etc etc etc.\n>\n> merlin\n>\n>\n\n-- \nAtentamente,\n\n\nJEISON BEDOYA DELGADO\nADM.Servidores y comunicaciones\nAUDIFARMA S.A.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 04 Sep 2013 13:52:00 -0500", "msg_from": "Jeison Bedoya Delgado <[email protected]>", "msg_from_op": true, "msg_subject": "Re: higth performance write to disk" }, { "msg_contents": "On 4.9.2013 20:52, Jeison Bedoya Delgado wrote:\n> Hi merlin, Thanks for your interest, I'm using version 9.2.2, I have\n> a machine with 128GB RAM, 32 cores, and my BD weighs 400GB. 
When I\n> say slow I meant that while consultations take or failing a copy with\n> pgdump, this taking the same time before had 10K disks in raid 10 and\n> now that I have SSDs in Raid 10.\n> \n> That behavior is normal, or you can improve writing and reading.\n\nSSDs are great random I/O, not that great for sequential I/O (better\nthan spinning drives, but you'll often run into other bottlenecks, for\nexample CPU).\n\nI'd bet this is what you're seeing. pg_dump is a heavily sequential\nworkload (read the whole table from start to end, write a huge dump to\nthe disk). A good RAID array with 10k SAS drives can give you very good\nperformance (I'd say ~500MB/s reads and writes for 6 drives in RAID10).\nI don't think the pg_dump will produce the data much faster.\n\nHave you done any tests (e.g. using fio) to test the performance of the\ntwo configurations? There might be some hw issue but if you have no\nbenchmarks it's difficult to judge.\n\nCan you run the fio tests now? The code is here:\n\n http://freecode.com/projects/fio\n\nand there are even a basic example:\n http://git.kernel.dk/?p=fio.git;a=blob_plain;f=examples/ssd-test.fio\n\n\nAnd how exactly are you running the pg_dump? And collect some basic\nstats next time it's running, for example a few samples from\n\n vmstat 5\n iostat -x -k 5\n\nand watch top how much CPU it's using.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 04 Sep 2013 21:19:43 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: higth performance write to disk" }, { "msg_contents": "On Wed, Sep 4, 2013 at 2:19 PM, Tomas Vondra <[email protected]> wrote:\n> On 4.9.2013 20:52, Jeison Bedoya Delgado wrote:\n>> Hi merlin, Thanks for your interest, I'm using version 9.2.2, I have\n>> a machine with 128GB RAM, 32 cores, and my BD weighs 400GB. When I\n>> say slow I meant that while consultations take or failing a copy with\n>> pgdump, this taking the same time before had 10K disks in raid 10 and\n>> now that I have SSDs in Raid 10.\n>>\n>> That behavior is normal, or you can improve writing and reading.\n>\n> SSDs are great random I/O, not that great for sequential I/O (better\n> than spinning drives, but you'll often run into other bottlenecks, for\n> example CPU).\n>\n> I'd bet this is what you're seeing. pg_dump is a heavily sequential\n> workload (read the whole table from start to end, write a huge dump to\n> the disk). A good RAID array with 10k SAS drives can give you very good\n> performance (I'd say ~500MB/s reads and writes for 6 drives in RAID10).\n> I don't think the pg_dump will produce the data much faster.\n>\n> Have you done any tests (e.g. using fio) to test the performance of the\n> two configurations? There might be some hw issue but if you have no\n> benchmarks it's difficult to judge.\n>\n> Can you run the fio tests now? The code is here:\n>\n> http://freecode.com/projects/fio\n>\n> and there are even a basic example:\n> http://git.kernel.dk/?p=fio.git;a=blob_plain;f=examples/ssd-test.fio\n>\n>\n> And how exactly are you running the pg_dump? And collect some basic\n> stats next time it's running, for example a few samples from\n>\n> vmstat 5\n> iostat -x -k 5\n>\n> and watch top how much CPU it's using.\n\nyeah. Also, some basic stats would be nice. For example how much data\nis getting written out and how long is it taking? 
We need to\nestablish benchmark of 'slow'.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Sep 2013 14:39:34 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: higth performance write to disk" } ]
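In addition to the fio, vmstat and iostat checks suggested above, 9.2 can time its own I/O, which helps decide whether the unchanged pg_dump time is really a disk problem. This is a sketch: big_table stands in for one of the large tables in the 400GB database, and track_io_timing needs superuser to change. If the reported read time accounts for most of the node's runtime, the scan is disk-bound; if it is a small fraction, the time is going into CPU work (row output, pg_dump compression and so on) and faster storage will not change much, which would match seeing the same times on 10k disks and on SSD.

    SET track_io_timing = on;   -- superuser only

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM big_table;
    -- compare the "I/O Timings: read=..." line with the node's total actual time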
[ { "msg_contents": "Hello Friends,\n\nI want start contributing in Postgres in code level. I read some ppts and some tutorials in postgres manual.\n\nCan you please suggest me some links where I can learn:\n\n1. Transaction Isolation in Database\n\n2. Query procession and possible optimizations.\n\n3. How to improve query and its performance.\n\n4. MVCC in detail.\n\n\nRegards\nTarkeshwar\n\n\n\n\n\n\n\n\n\n\n\n \n\n\nHello Friends,\n \nI want start contributing in Postgres in code level. I read some ppts and some tutorials in postgres manual.\n \nCan you please suggest me some links where I can learn:\n1.      \nTransaction Isolation in Database\n2.      \nQuery procession and possible optimizations.\n3.      \nHow to improve query and its performance.\n4.      \nMVCC in detail.\n \n \nRegards\nTarkeshwar", "msg_date": "Thu, 5 Sep 2013 05:13:19 +0000", "msg_from": "M Tarkeshwar Rao <[email protected]>", "msg_from_op": true, "msg_subject": "Can you please suggest me some links where I can learn:" }, { "msg_contents": "On 9/4/2013 10:13 PM, M Tarkeshwar Rao wrote:\n>\n> Can you please suggest me some links where I can learn:\n>\n> 1.Transaction Isolation in Database\n>\n\nthe documentation\n\n> 2.Query procession and possible optimizations.\n>\n\nthe source code. if there were any easy optimizations, I'm sure \nthey've already been taken.\n\n> 3.How to improve query and its performance.\n>\n\nsee #2 above.\n\n> 4.MVCC in detail.\n>\n\nsee answer to #1, and the source code if you want more detail.\n\n\n-- \njohn r pierce 37N 122W\nsomewhere on the middle of the left coast\n\n\n\n\n\n\n\nOn 9/4/2013 10:13 PM, M Tarkeshwar Rao\n wrote:\n\n\nCan\n you please suggest me some links where I can learn:\n1.      \n Transaction\n Isolation in Database\n\n\n the documentation\n\n\n\n2.      \n Query\n procession and possible optimizations.\n\n\n the source code.     if there were any easy optimizations, I'm sure\n they've already been taken.\n\n\n\n3.      \n How\n to improve query and its performance.\n\n\n see #2 above.\n\n\n\n4.      
\n MVCC\n in detail.\n\n\n see answer to #1, and the source code if you want more detail.\n\n\n-- \njohn r pierce 37N 122W\nsomewhere on the middle of the left coast", "msg_date": "Wed, 04 Sep 2013 22:25:46 -0700", "msg_from": "John R Pierce <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can you please suggest me some links where I can learn:" }, { "msg_contents": "Hi,\n\nFrom the pg_xlog folder, I found some files with interesting time stamps: older file names with newer timestamps, can you please advise why?\n\nSet 1: How come 0000000400000F490000008D is 10 minutes newer than 0000000400000F490000008E?\n-rw------- 1 111 115 16777216 Sep 4 15:28 0000000400000F490000008C\n-rw------- 1 111 115 16777216 Sep 4 15:27 0000000400000F490000008D <===\n-rw------- 1 111 115 16777216 Sep 4 15:17 0000000400000F490000008E <====\n-rw------- 1 111 115 16777216 Sep 4 15:26 0000000400000F490000008F\n-rw------- 1 111 115 16777216 Sep 4 15:27 0000000400000F4900000090\n\nSet 2: why files, 0000000400000F48000000FD, 0000000400000F48000000FE and 0000000400000F4900000000, are not reused?\n1) -rw------- 1 postgres postgres 16777216 Sep 4 23:07 0000000400000F48000000FA\n2) -rw------- 1 postgres postgres 16777216 Sep 4 23:08 0000000400000F48000000FB\n3) -rw------- 1 postgres postgres 16777216 Sep 4 23:09 0000000400000F48000000FC <===\n4) -rw------- 1 postgres postgres 16777216 Sep 4 14:47 0000000400000F48000000FD <====\n5) -rw------- 1 postgres postgres 16777216 Sep 4 14:46 0000000400000F48000000FE\n6) -rw------- 1 postgres postgres 16777216 Sep 4 14:46 0000000400000F4900000000\n\nregards\nHi,From the pg_xlog folder, I found some files with interesting time stamps: older file names with newer timestamps, can you please advise why?Set 1: How come 0000000400000F490000008D is 10 minutes newer than 0000000400000F490000008E?-rw------- 1 111 115 16777216 Sep  4 15:28 0000000400000F490000008C-rw------- 1 111 115 16777216 Sep  4 15:27 0000000400000F490000008D <===-rw------- 1 111 115 16777216 Sep  4 15:17 0000000400000F490000008E <====-rw------- 1 111 115 16777216 Sep  4 15:26 0000000400000F490000008F-rw------- 1 111 115 16777216 Sep  4 15:27 0000000400000F4900000090Set 2: why files,  0000000400000F48000000FD,  0000000400000F48000000FE and 0000000400000F4900000000, are not reused?1) -rw------- 1 postgres postgres 16777216 Sep  4 23:07 0000000400000F48000000FA2) -rw------- 1 postgres postgres 16777216 Sep  4 23:08 0000000400000F48000000FB3) -rw------- 1 postgres postgres 16777216 Sep  4 23:09 0000000400000F48000000FC  <===4) -rw------- 1 postgres postgres 16777216 Sep  4 14:47 0000000400000F48000000FD  <====5) -rw------- 1 postgres postgres 16777216 Sep  4 14:46 0000000400000F48000000FE6) -rw------- 1 postgres postgres 16777216 Sep  4 14:46 0000000400000F4900000000regards", "msg_date": "Thu, 5 Sep 2013 21:19:41 +0800", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Question About WAL filename and its time stamp " }, { "msg_contents": "M Tarkeshwar Rao <[email protected]> wrote:\n\n> I want start contributing in Postgres in code level.\n\nWelcome!  You should probably start with this page and its links:\n\nhttp://wiki.postgresql.org/wiki/Developer_FAQ\n\n> Can you please suggest me some links where I can learn:\n> 1.       Transaction Isolation in Database\n\nhttp://www.postgresql.org/docs/current/interactive/transaction-iso.html\n\n> 2.       
Query procession and possible optimizations.\n\nhttp://www.postgresql.org/docs/current/interactive/overview.html\n\nhttp://www.pgcon.org/2011/schedule/attachments/188_Planner%20talk.pdf\n\nhttp://www.justin.tv/sfpug/b/419326732\n\n> 3.       How to improve query and its performance.\n\nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\n> 4.       MVCC in detail.\n\nhttp://www.slideshare.net/profyclub_ru/mvcc-unmaskedbruce-momjian\n\n-- \nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Thu, 5 Sep 2013 08:39:32 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Can you please suggest me some links where I can learn:" }, { "msg_contents": "On Thu, Sep 5, 2013 at 10:39 AM, Kevin Grittner <[email protected]> wrote:\n> M Tarkeshwar Rao <[email protected]> wrote:\n>\n>> I want start contributing in Postgres in code level.\n>\n> Welcome! You should probably start with this page and its links:\n>\n> http://wiki.postgresql.org/wiki/Developer_FAQ\n>\n>> Can you please suggest me some links where I can learn:\n>> 1. Transaction Isolation in Database\n>\n> http://www.postgresql.org/docs/current/interactive/transaction-iso.html\n>\n>> 2. Query procession and possible optimizations.\n>\n> http://www.postgresql.org/docs/current/interactive/overview.html\n>\n> http://www.pgcon.org/2011/schedule/attachments/188_Planner%20talk.pdf\n>\n> http://www.justin.tv/sfpug/b/419326732\n>\n>> 3. How to improve query and its performance.\n>\n> http://wiki.postgresql.org/wiki/Performance_Optimization\n>\n>> 4. MVCC in detail.\n>\n> http://www.slideshare.net/profyclub_ru/mvcc-unmaskedbruce-momjian\n\nAnother high priority thing to check out is the README files in the\ncode (this is mentioned tangentially in the developer FAQ).\nPersonally (and I'm no expert) I find some parts of the code much\neasier to dive into without a lot of surrounding context than others\n-- it could take a lifetime to learn it all. My advise is to start\nsmall and pick a very specific topic and focus on that.\n\nmerlin\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Thu, 5 Sep 2013 17:42:19 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Can you please suggest me some links where I can learn:" }, { "msg_contents": "On 9/5/2013 3:42 PM, Merlin Moncure wrote:\n> My advise is to start\n> small and pick a very specific topic and focus on that.\n\nmy advise is to first familiarize yourself with the package from the \nuser perspective before even thinking of diving in and making any \nchanges. 
I say this, because the OP's questions seemed to suggest \nthey knew very little about PostgreSQL.\n\n\n\n-- \njohn r pierce 37N 122W\nsomewhere on the middle of the left coast\n\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Thu, 05 Sep 2013 15:53:38 -0700", "msg_from": "John R Pierce <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Can you please suggest me some links where I can learn:" }, { "msg_contents": "Hi,\n\n\n(13/09/05 22:19), [email protected] wrote:\n> Hi,\n>\n> From the pg_xlog folder, I found some files with interesting time stamps: older file names with newer timestamps, can you please advise why?\n>\n> Set 1: How come 0000000400000F490000008D is 10 minutes newer than 0000000400000F490000008E?\n> -rw------- 1 111 115 16777216 Sep 4 15:28 0000000400000F490000008C\n> -rw------- 1 111 115 16777216 Sep 4 15:27 0000000400000F490000008D <===\n> -rw------- 1 111 115 16777216 Sep 4 15:17 0000000400000F490000008E <====\n> -rw------- 1 111 115 16777216 Sep 4 15:26 0000000400000F490000008F\n> -rw------- 1 111 115 16777216 Sep 4 15:27 0000000400000F4900000090\n\nWAL files will be recycled.\nFor example:\n\nsampledb=# select pg_xlogfile_name(pg_current_xlog_location());\n pg_xlogfile_name\n--------------------------\n 000000010000000000000004\n(1 row)\n\n-rw-------. 1 postgres postgres 16777216 Sep 6 10:55 \n000000010000000000000001\n-rw-------. 1 postgres postgres 16777216 Sep 6 10:56 \n000000010000000000000002\n-rw-------. 1 postgres postgres 16777216 Sep 6 10:57 \n000000010000000000000003\n-rw-------. 1 postgres postgres 16777216 Sep 6 10:58 \n000000010000000000000004 <--- current WAL\n\nAfter a few minutes,\n\nsampledb=# select pg_xlogfile_name(pg_current_xlog_location());\n pg_xlogfile_name\n--------------------------\n 000000010000000000000006\n(1 row)\n\n-rw-------. 1 postgres postgres 16777216 Sep 6 11:01 \n000000010000000000000004 <-- Time of the last write.\n-rw-------. 1 postgres postgres 16777216 Sep 6 11:02 \n000000010000000000000005\n-rw-------. 1 postgres postgres 16777216 Sep 6 11:02 \n000000010000000000000006 <-- current WAL\n-rw-------. 1 postgres postgres 16777216 Sep 6 10:55 \n000000010000000000000007 <-- old name is 000000010000000000000001\n-rw-------. 1 postgres postgres 16777216 Sep 6 10:56 \n000000010000000000000008 <-- old name is 000000010000000000000002\n-rw-------. 1 postgres postgres 16777216 Sep 6 10:57 \n000000010000000000000009 <-- old name is 000000010000000000000003\n\n\nTiming of recycling depends on the situation. If the time stamp of \ncurrent WAL file is the most recent compared with other WAL files, there \nis no contradiction.\n(I wonder that the time stamp of 0000000400000F490000008C is the most \nrecent.)\n\n\n\n> Set 2: why files, 0000000400000F48000000FD, 0000000400000F48000000FE and 0000000400000F4900000000, are not reused?\n> 1) -rw------- 1 postgres postgres 16777216 Sep 4 23:07 0000000400000F48000000FA\n> 2) -rw------- 1 postgres postgres 16777216 Sep 4 23:08 0000000400000F48000000FB\n> 3) -rw------- 1 postgres postgres 16777216 Sep 4 23:09 0000000400000F48000000FC <===\n> 4) -rw------- 1 postgres postgres 16777216 Sep 4 14:47 0000000400000F48000000FD <====\n> 5) -rw------- 1 postgres postgres 16777216 Sep 4 14:46 0000000400000F48000000FE\n> 6) -rw------- 1 postgres postgres 16777216 Sep 4 14:46 0000000400000F4900000000\n\nThis is the specification of WAL. 
This specification changes from 9.3.\n\n\nregards\n\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Fri, 06 Sep 2013 11:17:16 +0900", "msg_from": "Suzuki Hironobu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question About WAL filename and its time stamp" }, { "msg_contents": "On 05 September 2013 18:50 ascot.moss wrote:\n\n>From the pg_xlog folder, I found some files with interesting time stamps: older file names with newer timestamps, can you please advise why?\n\n>Set 1: How come 0000000400000F490000008D is 10 minutes newer than 0000000400000F490000008E?\n>-rw------- 1 111 115 16777216 Sep 4 15:28 0000000400000F490000008C\n>-rw------- 1 111 115 16777216 Sep 4 15:27 0000000400000F490000008D <===\n>-rw------- 1 111 115 16777216 Sep 4 15:17 0000000400000F490000008E <====\n>-rw------- 1 111 115 16777216 Sep 4 15:26 0000000400000F490000008F\n>-rw------- 1 111 115 16777216 Sep 4 15:27 0000000400000F4900000090\n\n>Set 2: why files, 0000000400000F48000000FD, 0000000400000F48000000FE and 0000000400000F4900000000, are not reused?\n>1) -rw------- 1 postgres postgres 16777216 Sep 4 23:07 0000000400000F48000000FA\n>2) -rw------- 1 postgres postgres 16777216 Sep 4 23:08 0000000400000F48000000FB\n>3) -rw------- 1 postgres postgres 16777216 Sep 4 23:09 0000000400000F48000000FC <===\n>4) -rw------- 1 postgres postgres 16777216 Sep 4 14:47 0000000400000F48000000FD <====\n>5) -rw------- 1 postgres postgres 16777216 Sep 4 14:46 0000000400000F48000000FE\n>6) -rw------- 1 postgres postgres 16777216 Sep 4 14:46 0000000400000F4900000000\n\nIn postgres every checkpoint end, it recycle or remove the old xlog files. During the recycle process it creates next set of xlog files\nWhich will be used later by database operations. The files FC, FD, FE and 00 are recycled files. Now the FC file is in use because of this\nReason the time stamp is different.\n\nRegards,\nHari babu.\n\n\n\n\n\n\n\n\n\n\n\nOn\n05 September 2013 18:50 ascot.moss wrote:\n \n>From the pg_xlog folder, I found some files with interesting time stamps: older file names with newer timestamps, can you please advise why?\n \n>Set 1: How come 0000000400000F490000008D is 10 minutes newer than 0000000400000F490000008E?\n>-rw------- 1 111 115 16777216 Sep  4 15:28 0000000400000F490000008C\n>-rw------- 1 111 115 16777216 Sep  4 15:27 0000000400000F490000008D <===\n>-rw------- 1 111 115 16777216 Sep  4 15:17 0000000400000F490000008E <====\n>-rw------- 1 111 115 16777216 Sep  4 15:26 0000000400000F490000008F\n>-rw------- 1 111 115 16777216 Sep  4 15:27 0000000400000F4900000090\n \n>Set 2: why files,  0000000400000F48000000FD,  0000000400000F48000000FE and 0000000400000F4900000000, are not reused?\n>1) -rw------- 1 postgres postgres 16777216 Sep  4 23:07 0000000400000F48000000FA\n>2) -rw------- 1 postgres postgres 16777216 Sep  4 23:08 0000000400000F48000000FB\n>3) -rw------- 1 postgres postgres 16777216 Sep  4 23:09 0000000400000F48000000FC  <===\n>4) -rw------- 1 postgres postgres 16777216 Sep  4 14:47 0000000400000F48000000FD  <====\n>5) -rw------- 1 postgres postgres 16777216 Sep  4 14:46 0000000400000F48000000FE\n>6) -rw------- 1 postgres postgres 16777216 Sep  4 14:46 0000000400000F4900000000\n \nIn postgres every checkpoint end, it recycle or remove the old xlog files. During the recycle process it creates next set of xlog files\nWhich will be used later by database operations. 
The files FC, FD, FE and 00 are recycled files. The FC file is now in use, which is\nwhy its time stamp is different.\n \nRegards,\nHari babu.", "msg_date": "Fri, 6 Sep 2013 05:22:28 +0000", "msg_from": "Haribabu kommi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question About WAL filename and its time stamp " } ]
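One way to see the recycling described above for yourself is to compare the segment the server is currently writing with the files sitting in pg_xlog. The following is only a rough sketch: it assumes a pre-10 server (the pg_xlog* function names used in this thread were renamed in later releases), superuser access for the directory-inspection functions, and a default pg_xlog location inside the data directory.

-- Which segment is being written right now?
SELECT pg_xlogfile_name(pg_current_xlog_location()) AS current_wal;

-- List the on-disk segments with their modification times (superuser only).
-- Recycled segments keep the mtime of their last write under the old name,
-- which is why "future" file names can carry older timestamps than the
-- segment currently in use.
SELECT f AS wal_file,
       (pg_stat_file('pg_xlog/' || f)).modification AS mtime
FROM pg_ls_dir('pg_xlog') AS f
WHERE f ~ '^[0-9A-F]{24}$'
ORDER BY f;

If the newest mtime in that listing always belongs to the segment reported by the first query, the timestamps in Set 1 and Set 2 above are consistent with normal recycling.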
[ { "msg_contents": "Hi All,\n\nI have a view, that when created with our create statement works \nwonderfully, a query on the view with a standard where clause that \nnarrows the result to a single row performs in under a single ms. \nHowever, when we export this view and re-import it (dump and restore of \nthe database, which happens often), the exported version of the view has \nbeen modified by Postgres to include various typecasting of some columns \nto text.\n\nAll columns that it typecasts to text are varchar(20), so there is \nnothing wrong in what it's doing there. However, with the view \ndefinition including the ::text casting, the query planner changes and \nit goes into a nested loop, taking a query from <1ms to over ten minutes.\n\n*NOTE: *When I execute the queries with and without ::text myself \noutside of the view, there is no issues. the query plan is correct and I \nget my result fast, it's only when putting the query in the view and \nthen executing it does it do the nested loop and take forever.\n\n\n\n----------- Query plan for view defined without ::text ---------------\nNested Loop (cost=17.440..272.590 rows=1 width=1810) (actual \ntime=0.515..0.527 rows=1 loops=1)\n -> Nested Loop (cost=17.440..265.480 rows=1 width=1275) (actual \ntime=0.471..0.482 rows=1 loops=1)\n -> Nested Loop (cost=17.440..265.190 rows=1 width=761) \n(actual time=0.460..0.471 rows=1 loops=1)\n -> Nested Loop (cost=17.440..258.910 rows=1 width=186) \n(actual time=0.437..0.447 rows=1 loops=1)\n -> Nested Loop (cost=17.440..258.590 rows=1 \nwidth=154) (actual time=0.417..0.425 rows=1 loops=1)\n -> Nested Loop (cost=17.440..252.240 rows=1 \nwidth=160) (actual time=0.388..0.395 rows=1 loops=1)\n Join Filter: \n((alpha_yankee.bravo_papa)::text = (six_zulu.kilo_uniform)::text)\n -> Nested Loop (cost=0.000..108.990 \nrows=7 width=137) (actual time=0.107..0.109 rows=1 loops=1)\n -> Nested Loop \n(cost=0.000..102.780 rows=10 width=124) (actual time=0.077..0.078 rows=1 \nloops=1)\n -> Index Scan using \njuliet_yankee on alpha_yankee (cost=0.000..18.240 rows=13 width=84) \n(actual time=0.043..0.044 rows=1 loops=1)\n Index Cond: \n((bravo_charlie)::text = 'tango'::text)\n -> Index Scan using \ncharlie_quebec on juliet_three (cost=0.000..6.490 rows=1 width=40) \n(actual time=0.029..0.029 rows=1 loops=1)\n Index Cond: \n((echo)::text = (alpha_yankee.kilo_yankee)::text)\n Filter: \n((four)::text = 'alpha_whiskey'::text)\n -> Index Scan using \ncharlie_yankee on romeo (cost=0.000..0.610 rows=1 width=33) (actual \ntime=0.027..0.027 rows=1 loops=1)\n Index Cond: ((echo)::text \n= (juliet_three.lima_victor)::text)\n Filter: \n((lima_bravo)::text = 'alpha_whiskey'::text)\n -> Bitmap Heap Scan on papa six_zulu \n(cost=17.440..20.450 rows=1 width=64) (actual time=0.268..0.276 rows=21 \nloops=1)\n Recheck Cond: \n(((charlie_victor)::text = (juliet_three.echo)::text) AND \n((bravo_india)::text = (alpha_yankee.juliet_whiskey)::text))\n -> BitmapAnd \n(cost=17.440..17.440 rows=1 width=0) (actual time=0.255..0.255 rows=0 \nloops=1)\n -> Bitmap Index Scan on \nalpha_foxtrot (cost=0.000..6.710 rows=28 width=0) (actual \ntime=0.048..0.048 rows=21 loops=1)\n Index Cond: \n((charlie_victor)::text = (juliet_three.echo)::text)\n -> Bitmap Index Scan on \ndelta (cost=0.000..10.440 rows=108 width=0) (actual time=0.202..0.202 \nrows=757 loops=1)\n Index Cond: \n((bravo_india)::text = (alpha_yankee.juliet_whiskey)::text)\n -> Index Scan using whiskey_india on \nalpha_lima (cost=0.000..6.340 rows=1 width=57) (actual 
\ntime=0.026..0.027 rows=1 loops=1)\n Index Cond: ((echo)::text = \n(six_zulu.bravo_india)::text)\n -> Index Scan using golf on whiskey_oscar \n(cost=0.000..0.310 rows=1 width=43) (actual time=0.017..0.018 rows=1 \nloops=1)\n Index Cond: ((echo)::text = \n(alpha_lima.whiskey_six)::text)\n -> Index Scan using bravo_foxtrot on mike_mike \nlima_charlie (cost=0.000..6.270 rows=1 width=617) (actual \ntime=0.020..0.020 rows=1 loops=1)\n Index Cond: ((echo)::text = \n(six_zulu.kilo_uniform)::text)\n -> Index Scan using charlie_papa on mike_oscar \n(cost=0.000..0.270 rows=1 width=530) (actual time=0.008..0.008 rows=1 \nloops=1)\n Index Cond: ((echo)::text = (lima_charlie.yankee)::text)\n -> Index Scan using juliet_victor on juliet_six six_seven \n(cost=0.000..7.080 rows=1 width=556) (actual time=0.033..0.034 rows=1 \nloops=1)\n Index Cond: ((echo)::text = 'tango'::text)\n Total runtime: 0.871 ms\n----------------------------------------------------------------------------------------\n\n----------- Query plan for view defined WITH ::text ---------------\nNested Loop (cost=176136.470..3143249.320 rows=1 width=1810) (actual \ntime=16357.306..815170.609 rows=1 loops=1)\n Join Filter: ((alpha_yankee.bravo_charlie)::text = \n(six_seven.echo)::text)\n -> Index Scan using juliet_victor on juliet_six six_seven \n(cost=0.000..7.080 rows=1 width=556) (actual time=0.035..0.038 rows=1 \nloops=1)\n Index Cond: ((echo)::text = 'tango'::text)\n -> Nested Loop (cost=176136.470..3143242.190 rows=2 width=1275) \n(actual time=13071.765..812874.705 rows=6815445 loops=1)\n -> Nested Loop (cost=176136.470..3143241.560 rows=2 \nwidth=1243) (actual time=13071.742..760766.802 rows=6815445 loops=1)\n -> Hash Join (cost=176136.470..3143228.850 rows=2 \nwidth=1249) (actual time=13071.694..632785.712 rows=6815445 loops=1)\n Hash Cond: ((six_zulu.kilo_uniform)::text = \n(lima_charlie.echo)::text)\n -> Merge Join (cost=174404.520..3141496.860 rows=2 \nwidth=160) (actual time=13023.713..619787.785 rows=6815445 loops=1)\n Merge Cond: ((juliet_three.echo)::text = \n(six_zulu.charlie)::text)\n Join Filter: \n(((alpha_yankee.juliet_whiskey)::text = (six_zulu.bravo_india)::text) \nAND ((alpha_yankee.bravo_papa)::text = (six_zulu.kilo_uniform)::text))\n -> Merge Join (cost=174391.960..1013040.600 \nrows=5084399 width=137) (actual time=13023.660..68777.622 rows=7805731 \nloops=1)\n Merge Cond: \n((alpha_yankee.kilo_yankee)::text = (juliet_three.echo)::text)\n -> Index Scan using five on \nalpha_yankee (cost=0.000..739352.050 rows=9249725 width=84) (actual \ntime=0.027..14936.587 rows=9204640 loops=1)\n -> Sort (cost=174391.750..175895.330 \nrows=601433 width=53) (actual time=13023.526..14415.639 rows=7952904 \nloops=1)\n Sort Key: juliet_three.echo\n Sort Method: quicksort Memory: \n139105kB\n -> Hash Join \n(cost=62105.650..116660.060 rows=601433 width=53) (actual \ntime=669.244..2278.990 rows=814472 loops=1)\n Hash Cond: \n((juliet_three.lima_victor)::text = (romeo.echo)::text)\n -> Seq Scan on \njuliet_three (cost=0.000..40391.860 rows=814822 width=40) (actual \ntime=0.009..677.160 rows=814473 loops=1)\n Filter: \n((four)::text = 'alpha_whiskey'::text)\n -> Hash \n(cost=57562.740..57562.740 rows=363433 width=33) (actual \ntime=668.812..668.812 rows=363736 loops=1)\n Buckets: 65536 \nBatches: 1 Memory Usage: 23832kB\n -> Seq Scan on \nromeo (cost=0.000..57562.740 rows=363433 width=33) (actual \ntime=0.012..489.104 rows=363736 loops=1)\n Filter: \n((lima_bravo)::text = 'alpha_whiskey'::text)\n -> Materialize (cost=0.000..1192114.040 
\nrows=10236405 width=64) (actual time=0.030..72475.323 rows=140608673 \nloops=1)\n -> Index Scan using alpha_foxtrot on \npapa six_zulu (cost=0.000..1166523.030 rows=10236405 width=64) (actual \ntime=0.024..22466.849 rows=10176345 loops=1)\n -> Hash (cost=1568.500..1568.500 rows=13076 \nwidth=1131) (actual time=47.954..47.954 rows=13054 loops=1)\n Buckets: 2048 Batches: 1 Memory Usage: \n13551kB\n -> Hash Join (cost=19.950..1568.500 \nrows=13076 width=1131) (actual time=0.415..24.461 rows=13054 loops=1)\n Hash Cond: \n((lima_charlie.yankee)::text = (mike_oscar.echo)::text)\n -> Seq Scan on mike_mike lima_charlie \n(cost=0.000..1368.760 rows=13076 width=617) (actual time=0.006..5.948 \nrows=13054 loops=1)\n -> Hash (cost=18.310..18.310 rows=131 \nwidth=530) (actual time=0.397..0.397 rows=131 loops=1)\n Buckets: 1024 Batches: 1 \nMemory Usage: 73kB\n -> Seq Scan on mike_oscar \n(cost=0.000..18.310 rows=131 width=530) (actual time=0.007..0.221 \nrows=131 loops=1)\n -> Index Scan using whiskey_india on alpha_lima \n(cost=0.000..6.340 rows=1 width=57) (actual time=0.017..0.017 rows=1 \nloops=6815445)\n Index Cond: ((echo)::text = \n(six_zulu.bravo_india)::text)\n -> Index Scan using golf on whiskey_oscar (cost=0.000..0.310 \nrows=1 width=43) (actual time=0.006..0.006 rows=1 loops=6815445)\n Index Cond: ((echo)::text = (alpha_lima.whiskey_six)::text)\n Total runtime: 815589.464 ms\n----------------------------------------------------------------------------------------\n\nIf I set enable_nestloop = off, then it works perfectly, however I don't \nreally have the option to do this, I'd rather see what's causing it.\n\nAny thoughts?\n\nThanks,\n- Brian F\n\n\n\n\n\n\n\n\n\n Hi All,\n\n I have a view, that when created with our create statement works\n wonderfully, a query on the view with a standard where clause that\n narrows the result to a single row performs in under a single ms.\n However, when we export this view and re-import it (dump and restore\n of the database, which happens often), the exported version of the\n view has been modified by Postgres to include various typecasting of\n some columns to text.\n\n All columns that it typecasts to text are varchar(20), so there is\n nothing wrong in what it's doing there. However, with the view\n definition including the ::text casting, the query planner changes\n and it goes into a nested loop, taking a query from <1ms to over\n ten minutes. \n\nNOTE: When I execute the queries with and without ::text\n myself outside of the view, there is no issues. 
the query plan is\n correct and I get my result fast, it's only when putting the query\n in the view and then executing it does it do the nested loop and\n take forever.\n\n\n\n ----------- Query plan for view defined without ::text\n ---------------\n\nNested Loop  (cost=17.440..272.590 rows=1\n width=1810) (actual time=0.515..0.527 rows=1 loops=1)\n   ->  Nested Loop  (cost=17.440..265.480 rows=1 width=1275)\n (actual time=0.471..0.482 rows=1 loops=1)\n         ->  Nested Loop  (cost=17.440..265.190 rows=1\n width=761) (actual time=0.460..0.471 rows=1 loops=1)\n               ->  Nested Loop  (cost=17.440..258.910 rows=1\n width=186) (actual time=0.437..0.447 rows=1 loops=1)\n                     ->  Nested Loop  (cost=17.440..258.590\n rows=1 width=154) (actual time=0.417..0.425 rows=1 loops=1)\n                           ->  Nested Loop \n (cost=17.440..252.240 rows=1 width=160) (actual time=0.388..0.395\n rows=1 loops=1)\n                                   Join Filter:\n ((alpha_yankee.bravo_papa)::text = (six_zulu.kilo_uniform)::text)\n                                 ->  Nested Loop \n (cost=0.000..108.990 rows=7 width=137) (actual time=0.107..0.109\n rows=1 loops=1)\n                                       ->  Nested Loop \n (cost=0.000..102.780 rows=10 width=124) (actual time=0.077..0.078\n rows=1 loops=1)\n                                             ->  Index Scan\n using juliet_yankee on alpha_yankee  (cost=0.000..18.240 rows=13\n width=84) (actual time=0.043..0.044 rows=1 loops=1)\n                                                     Index Cond:\n ((bravo_charlie)::text = 'tango'::text)\n                                             ->  Index Scan\n using charlie_quebec on juliet_three  (cost=0.000..6.490 rows=1\n width=40) (actual time=0.029..0.029 rows=1 loops=1)\n                                                     Index Cond:\n ((echo)::text = (alpha_yankee.kilo_yankee)::text)\n                                                     Filter:\n ((four)::text = 'alpha_whiskey'::text)\n                                       ->  Index Scan using\n charlie_yankee on romeo  (cost=0.000..0.610 rows=1 width=33)\n (actual time=0.027..0.027 rows=1 loops=1)\n                                               Index Cond:\n ((echo)::text = (juliet_three.lima_victor)::text)\n                                               Filter:\n ((lima_bravo)::text = 'alpha_whiskey'::text)\n                                 ->  Bitmap Heap Scan on papa\n six_zulu  (cost=17.440..20.450 rows=1 width=64) (actual\n time=0.268..0.276 rows=21 loops=1)\n                                         Recheck Cond:\n (((charlie_victor)::text = (juliet_three.echo)::text) AND\n ((bravo_india)::text = (alpha_yankee.juliet_whiskey)::text))\n                                       ->  BitmapAnd \n (cost=17.440..17.440 rows=1 width=0) (actual time=0.255..0.255\n rows=0 loops=1)\n                                             ->  Bitmap Index\n Scan on alpha_foxtrot  (cost=0.000..6.710 rows=28 width=0) (actual\n time=0.048..0.048 rows=21 loops=1)\n                                                     Index Cond:\n ((charlie_victor)::text = (juliet_three.echo)::text)\n                                             ->  Bitmap Index\n Scan on delta  (cost=0.000..10.440 rows=108 width=0) (actual\n time=0.202..0.202 rows=757 loops=1)\n                                                     Index Cond:\n ((bravo_india)::text = (alpha_yankee.juliet_whiskey)::text)\n                           ->  Index Scan using whiskey_india on\n alpha_lima  
(cost=0.000..6.340 rows=1 width=57) (actual\n time=0.026..0.027 rows=1 loops=1)\n                                   Index Cond: ((echo)::text =\n (six_zulu.bravo_india)::text)\n                     ->  Index Scan using golf on whiskey_oscar \n (cost=0.000..0.310 rows=1 width=43) (actual time=0.017..0.018\n rows=1 loops=1)\n                             Index Cond: ((echo)::text =\n (alpha_lima.whiskey_six)::text)\n               ->  Index Scan using bravo_foxtrot on mike_mike\n lima_charlie  (cost=0.000..6.270 rows=1 width=617) (actual\n time=0.020..0.020 rows=1 loops=1)\n                       Index Cond: ((echo)::text =\n (six_zulu.kilo_uniform)::text)\n         ->  Index Scan using charlie_papa on mike_oscar \n (cost=0.000..0.270 rows=1 width=530) (actual time=0.008..0.008\n rows=1 loops=1)\n                 Index Cond: ((echo)::text =\n (lima_charlie.yankee)::text)\n   ->  Index Scan using juliet_victor on juliet_six six_seven \n (cost=0.000..7.080 rows=1 width=556) (actual time=0.033..0.034\n rows=1 loops=1)\n           Index Cond: ((echo)::text = 'tango'::text)\n  Total runtime: 0.871 ms\n----------------------------------------------------------------------------------------\n\n ----------- Query plan for view defined WITH ::text ---------------\nNested Loop  (cost=176136.470..3143249.320\n rows=1 width=1810) (actual time=16357.306..815170.609 rows=1\n loops=1)\n     Join Filter: ((alpha_yankee.bravo_charlie)::text =\n (six_seven.echo)::text)\n   ->  Index Scan using juliet_victor on juliet_six six_seven \n (cost=0.000..7.080 rows=1 width=556) (actual time=0.035..0.038\n rows=1 loops=1)\n           Index Cond: ((echo)::text = 'tango'::text)\n   ->  Nested Loop  (cost=176136.470..3143242.190 rows=2\n width=1275) (actual time=13071.765..812874.705 rows=6815445\n loops=1)\n         ->  Nested Loop  (cost=176136.470..3143241.560 rows=2\n width=1243) (actual time=13071.742..760766.802 rows=6815445\n loops=1)\n               ->  Hash Join  (cost=176136.470..3143228.850\n rows=2 width=1249) (actual time=13071.694..632785.712 rows=6815445\n loops=1)\n                       Hash Cond: ((six_zulu.kilo_uniform)::text =\n (lima_charlie.echo)::text)\n                     ->  Merge Join \n (cost=174404.520..3141496.860 rows=2 width=160) (actual\n time=13023.713..619787.785 rows=6815445 loops=1)\n                             Merge Cond: ((juliet_three.echo)::text\n = (six_zulu.charlie)::text)\n                             Join Filter:\n (((alpha_yankee.juliet_whiskey)::text =\n (six_zulu.bravo_india)::text) AND ((alpha_yankee.bravo_papa)::text\n = (six_zulu.kilo_uniform)::text))\n                           ->  Merge Join \n (cost=174391.960..1013040.600 rows=5084399 width=137) (actual\n time=13023.660..68777.622 rows=7805731 loops=1)\n                                   Merge Cond:\n ((alpha_yankee.kilo_yankee)::text = (juliet_three.echo)::text)\n                                 ->  Index Scan using five on\n alpha_yankee  (cost=0.000..739352.050 rows=9249725 width=84)\n (actual time=0.027..14936.587 rows=9204640 loops=1)\n                                 ->  Sort \n (cost=174391.750..175895.330 rows=601433 width=53) (actual\n time=13023.526..14415.639 rows=7952904 loops=1)\n                                         Sort Key:\n juliet_three.echo\n                                         Sort Method: quicksort \n Memory: 139105kB\n                                       ->  Hash Join \n (cost=62105.650..116660.060 rows=601433 width=53) (actual\n time=669.244..2278.990 rows=814472 loops=1)\n             
                                  Hash Cond:\n ((juliet_three.lima_victor)::text = (romeo.echo)::text)\n                                             ->  Seq Scan on\n juliet_three  (cost=0.000..40391.860 rows=814822 width=40) (actual\n time=0.009..677.160 rows=814473 loops=1)\n                                                     Filter:\n ((four)::text = 'alpha_whiskey'::text)\n                                             ->  Hash \n (cost=57562.740..57562.740 rows=363433 width=33) (actual\n time=668.812..668.812 rows=363736 loops=1)\n                                                     Buckets:\n 65536  Batches: 1  Memory Usage: 23832kB\n                                                   ->  Seq Scan\n on romeo  (cost=0.000..57562.740 rows=363433 width=33) (actual\n time=0.012..489.104 rows=363736 loops=1)\n                                                           Filter:\n ((lima_bravo)::text = 'alpha_whiskey'::text)\n                           ->  Materialize \n (cost=0.000..1192114.040 rows=10236405 width=64) (actual\n time=0.030..72475.323 rows=140608673 loops=1)\n                                 ->  Index Scan using\n alpha_foxtrot on papa six_zulu  (cost=0.000..1166523.030\n rows=10236405 width=64) (actual time=0.024..22466.849\n rows=10176345 loops=1)\n                     ->  Hash  (cost=1568.500..1568.500\n rows=13076 width=1131) (actual time=47.954..47.954 rows=13054\n loops=1)\n                             Buckets: 2048  Batches: 1  Memory\n Usage: 13551kB\n                           ->  Hash Join  (cost=19.950..1568.500\n rows=13076 width=1131) (actual time=0.415..24.461 rows=13054\n loops=1)\n                                   Hash Cond:\n ((lima_charlie.yankee)::text = (mike_oscar.echo)::text)\n                                 ->  Seq Scan on mike_mike\n lima_charlie  (cost=0.000..1368.760 rows=13076 width=617) (actual\n time=0.006..5.948 rows=13054 loops=1)\n                                 ->  Hash  (cost=18.310..18.310\n rows=131 width=530) (actual time=0.397..0.397 rows=131 loops=1)\n                                         Buckets: 1024  Batches: 1 \n Memory Usage: 73kB\n                                       ->  Seq Scan on\n mike_oscar  (cost=0.000..18.310 rows=131 width=530) (actual\n time=0.007..0.221 rows=131 loops=1)\n               ->  Index Scan using whiskey_india on alpha_lima \n (cost=0.000..6.340 rows=1 width=57) (actual time=0.017..0.017\n rows=1 loops=6815445)\n                       Index Cond: ((echo)::text =\n (six_zulu.bravo_india)::text)\n         ->  Index Scan using golf on whiskey_oscar \n (cost=0.000..0.310 rows=1 width=43) (actual time=0.006..0.006\n rows=1 loops=6815445)\n                 Index Cond: ((echo)::text =\n (alpha_lima.whiskey_six)::text)\n  Total runtime: 815589.464 ms\n----------------------------------------------------------------------------------------\n\n If I set enable_nestloop = off, then it works perfectly, however I\n don't really have the option to do this, I'd rather see what's\n causing it.\n\n Any thoughts?\n\n Thanks,\n - Brian F", "msg_date": "Thu, 05 Sep 2013 16:45:16 -0600", "msg_from": "Brian Fehrle <[email protected]>", "msg_from_op": true, "msg_subject": "View with and without ::text casting performs differently." 
}, { "msg_contents": "Apologies, forgot to include Postgres version 9.1.9\n\n- Brian F\nOn 09/05/2013 04:45 PM, Brian Fehrle wrote:\n> Hi All,\n>\n> I have a view, that when created with our create statement works \n> wonderfully, a query on the view with a standard where clause that \n> narrows the result to a single row performs in under a single ms. \n> However, when we export this view and re-import it (dump and restore \n> of the database, which happens often), the exported version of the \n> view has been modified by Postgres to include various typecasting of \n> some columns to text.\n>\n> All columns that it typecasts to text are varchar(20), so there is \n> nothing wrong in what it's doing there. However, with the view \n> definition including the ::text casting, the query planner changes and \n> it goes into a nested loop, taking a query from <1ms to over ten minutes.\n>\n> *NOTE: *When I execute the queries with and without ::text myself \n> outside of the view, there is no issues. the query plan is correct and \n> I get my result fast, it's only when putting the query in the view and \n> then executing it does it do the nested loop and take forever.\n>\n>\n>\n> ----------- Query plan for view defined without ::text ---------------\n> Nested Loop (cost=17.440..272.590 rows=1 width=1810) (actual \n> time=0.515..0.527 rows=1 loops=1)\n> -> Nested Loop (cost=17.440..265.480 rows=1 width=1275) (actual \n> time=0.471..0.482 rows=1 loops=1)\n> -> Nested Loop (cost=17.440..265.190 rows=1 width=761) \n> (actual time=0.460..0.471 rows=1 loops=1)\n> -> Nested Loop (cost=17.440..258.910 rows=1 width=186) \n> (actual time=0.437..0.447 rows=1 loops=1)\n> -> Nested Loop (cost=17.440..258.590 rows=1 \n> width=154) (actual time=0.417..0.425 rows=1 loops=1)\n> -> Nested Loop (cost=17.440..252.240 rows=1 \n> width=160) (actual time=0.388..0.395 rows=1 loops=1)\n> Join Filter: \n> ((alpha_yankee.bravo_papa)::text = (six_zulu.kilo_uniform)::text)\n> -> Nested Loop (cost=0.000..108.990 \n> rows=7 width=137) (actual time=0.107..0.109 rows=1 loops=1)\n> -> Nested Loop \n> (cost=0.000..102.780 rows=10 width=124) (actual time=0.077..0.078 \n> rows=1 loops=1)\n> -> Index Scan using \n> juliet_yankee on alpha_yankee (cost=0.000..18.240 rows=13 width=84) \n> (actual time=0.043..0.044 rows=1 loops=1)\n> Index Cond: \n> ((bravo_charlie)::text = 'tango'::text)\n> -> Index Scan using \n> charlie_quebec on juliet_three (cost=0.000..6.490 rows=1 width=40) \n> (actual time=0.029..0.029 rows=1 loops=1)\n> Index Cond: \n> ((echo)::text = (alpha_yankee.kilo_yankee)::text)\n> Filter: \n> ((four)::text = 'alpha_whiskey'::text)\n> -> Index Scan using \n> charlie_yankee on romeo (cost=0.000..0.610 rows=1 width=33) (actual \n> time=0.027..0.027 rows=1 loops=1)\n> Index Cond: \n> ((echo)::text = (juliet_three.lima_victor)::text)\n> Filter: \n> ((lima_bravo)::text = 'alpha_whiskey'::text)\n> -> Bitmap Heap Scan on papa six_zulu \n> (cost=17.440..20.450 rows=1 width=64) (actual time=0.268..0.276 \n> rows=21 loops=1)\n> Recheck Cond: \n> (((charlie_victor)::text = (juliet_three.echo)::text) AND \n> ((bravo_india)::text = (alpha_yankee.juliet_whiskey)::text))\n> -> BitmapAnd \n> (cost=17.440..17.440 rows=1 width=0) (actual time=0.255..0.255 rows=0 \n> loops=1)\n> -> Bitmap Index Scan on \n> alpha_foxtrot (cost=0.000..6.710 rows=28 width=0) (actual \n> time=0.048..0.048 rows=21 loops=1)\n> Index Cond: \n> ((charlie_victor)::text = (juliet_three.echo)::text)\n> -> Bitmap Index Scan on \n> delta (cost=0.000..10.440 rows=108 
width=0) (actual time=0.202..0.202 \n> rows=757 loops=1)\n> Index Cond: \n> ((bravo_india)::text = (alpha_yankee.juliet_whiskey)::text)\n> -> Index Scan using whiskey_india on \n> alpha_lima (cost=0.000..6.340 rows=1 width=57) (actual \n> time=0.026..0.027 rows=1 loops=1)\n> Index Cond: ((echo)::text = \n> (six_zulu.bravo_india)::text)\n> -> Index Scan using golf on whiskey_oscar \n> (cost=0.000..0.310 rows=1 width=43) (actual time=0.017..0.018 rows=1 \n> loops=1)\n> Index Cond: ((echo)::text = \n> (alpha_lima.whiskey_six)::text)\n> -> Index Scan using bravo_foxtrot on mike_mike \n> lima_charlie (cost=0.000..6.270 rows=1 width=617) (actual \n> time=0.020..0.020 rows=1 loops=1)\n> Index Cond: ((echo)::text = \n> (six_zulu.kilo_uniform)::text)\n> -> Index Scan using charlie_papa on mike_oscar \n> (cost=0.000..0.270 rows=1 width=530) (actual time=0.008..0.008 rows=1 \n> loops=1)\n> Index Cond: ((echo)::text = (lima_charlie.yankee)::text)\n> -> Index Scan using juliet_victor on juliet_six six_seven \n> (cost=0.000..7.080 rows=1 width=556) (actual time=0.033..0.034 rows=1 \n> loops=1)\n> Index Cond: ((echo)::text = 'tango'::text)\n> Total runtime: 0.871 ms\n> ----------------------------------------------------------------------------------------\n>\n> ----------- Query plan for view defined WITH ::text ---------------\n> Nested Loop (cost=176136.470..3143249.320 rows=1 width=1810) (actual \n> time=16357.306..815170.609 rows=1 loops=1)\n> Join Filter: ((alpha_yankee.bravo_charlie)::text = \n> (six_seven.echo)::text)\n> -> Index Scan using juliet_victor on juliet_six six_seven \n> (cost=0.000..7.080 rows=1 width=556) (actual time=0.035..0.038 rows=1 \n> loops=1)\n> Index Cond: ((echo)::text = 'tango'::text)\n> -> Nested Loop (cost=176136.470..3143242.190 rows=2 width=1275) \n> (actual time=13071.765..812874.705 rows=6815445 loops=1)\n> -> Nested Loop (cost=176136.470..3143241.560 rows=2 \n> width=1243) (actual time=13071.742..760766.802 rows=6815445 loops=1)\n> -> Hash Join (cost=176136.470..3143228.850 rows=2 \n> width=1249) (actual time=13071.694..632785.712 rows=6815445 loops=1)\n> Hash Cond: ((six_zulu.kilo_uniform)::text = \n> (lima_charlie.echo)::text)\n> -> Merge Join (cost=174404.520..3141496.860 \n> rows=2 width=160) (actual time=13023.713..619787.785 rows=6815445 loops=1)\n> Merge Cond: ((juliet_three.echo)::text = \n> (six_zulu.charlie)::text)\n> Join Filter: \n> (((alpha_yankee.juliet_whiskey)::text = (six_zulu.bravo_india)::text) \n> AND ((alpha_yankee.bravo_papa)::text = (six_zulu.kilo_uniform)::text))\n> -> Merge Join (cost=174391.960..1013040.600 \n> rows=5084399 width=137) (actual time=13023.660..68777.622 rows=7805731 \n> loops=1)\n> Merge Cond: \n> ((alpha_yankee.kilo_yankee)::text = (juliet_three.echo)::text)\n> -> Index Scan using five on \n> alpha_yankee (cost=0.000..739352.050 rows=9249725 width=84) (actual \n> time=0.027..14936.587 rows=9204640 loops=1)\n> -> Sort (cost=174391.750..175895.330 \n> rows=601433 width=53) (actual time=13023.526..14415.639 rows=7952904 \n> loops=1)\n> Sort Key: juliet_three.echo\n> Sort Method: quicksort Memory: \n> 139105kB\n> -> Hash Join \n> (cost=62105.650..116660.060 rows=601433 width=53) (actual \n> time=669.244..2278.990 rows=814472 loops=1)\n> Hash Cond: \n> ((juliet_three.lima_victor)::text = (romeo.echo)::text)\n> -> Seq Scan on \n> juliet_three (cost=0.000..40391.860 rows=814822 width=40) (actual \n> time=0.009..677.160 rows=814473 loops=1)\n> Filter: \n> ((four)::text = 'alpha_whiskey'::text)\n> -> Hash \n> 
(cost=57562.740..57562.740 rows=363433 width=33) (actual \n> time=668.812..668.812 rows=363736 loops=1)\n> Buckets: 65536 \n> Batches: 1 Memory Usage: 23832kB\n> -> Seq Scan on \n> romeo (cost=0.000..57562.740 rows=363433 width=33) (actual \n> time=0.012..489.104 rows=363736 loops=1)\n> Filter: ((lima_bravo)::text = 'alpha_whiskey'::text)\n> -> Materialize (cost=0.000..1192114.040 \n> rows=10236405 width=64) (actual time=0.030..72475.323 rows=140608673 \n> loops=1)\n> -> Index Scan using alpha_foxtrot on \n> papa six_zulu (cost=0.000..1166523.030 rows=10236405 width=64) \n> (actual time=0.024..22466.849 rows=10176345 loops=1)\n> -> Hash (cost=1568.500..1568.500 rows=13076 \n> width=1131) (actual time=47.954..47.954 rows=13054 loops=1)\n> Buckets: 2048 Batches: 1 Memory Usage: \n> 13551kB\n> -> Hash Join (cost=19.950..1568.500 \n> rows=13076 width=1131) (actual time=0.415..24.461 rows=13054 loops=1)\n> Hash Cond: \n> ((lima_charlie.yankee)::text = (mike_oscar.echo)::text)\n> -> Seq Scan on mike_mike \n> lima_charlie (cost=0.000..1368.760 rows=13076 width=617) (actual \n> time=0.006..5.948 rows=13054 loops=1)\n> -> Hash (cost=18.310..18.310 rows=131 \n> width=530) (actual time=0.397..0.397 rows=131 loops=1)\n> Buckets: 1024 Batches: 1 \n> Memory Usage: 73kB\n> -> Seq Scan on mike_oscar \n> (cost=0.000..18.310 rows=131 width=530) (actual time=0.007..0.221 \n> rows=131 loops=1)\n> -> Index Scan using whiskey_india on alpha_lima \n> (cost=0.000..6.340 rows=1 width=57) (actual time=0.017..0.017 rows=1 \n> loops=6815445)\n> Index Cond: ((echo)::text = \n> (six_zulu.bravo_india)::text)\n> -> Index Scan using golf on whiskey_oscar (cost=0.000..0.310 \n> rows=1 width=43) (actual time=0.006..0.006 rows=1 loops=6815445)\n> Index Cond: ((echo)::text = \n> (alpha_lima.whiskey_six)::text)\n> Total runtime: 815589.464 ms\n> ----------------------------------------------------------------------------------------\n>\n> If I set enable_nestloop = off, then it works perfectly, however I \n> don't really have the option to do this, I'd rather see what's causing it.\n>\n> Any thoughts?\n>\n> Thanks,\n> - Brian F\n>\n>\n>\n\n\n\n\n\n\n\nApologies, forgot to include Postgres\n version  9.1.9\n\n - Brian F\n On 09/05/2013 04:45 PM, Brian Fehrle wrote:\n\n\n\n Hi All,\n\n I have a view, that when created with our create statement works\n wonderfully, a query on the view with a standard where clause that\n narrows the result to a single row performs in under a single ms.\n However, when we export this view and re-import it (dump and\n restore of the database, which happens often), the exported\n version of the view has been modified by Postgres to include\n various typecasting of some columns to text.\n\n All columns that it typecasts to text are varchar(20), so there is\n nothing wrong in what it's doing there. However, with the view\n definition including the ::text casting, the query planner changes\n and it goes into a nested loop, taking a query from <1ms to\n over ten minutes. \n\nNOTE: When I execute the queries with and without ::text\n myself outside of the view, there is no issues. 
the query plan is\n correct and I get my result fast, it's only when putting the query\n in the view and then executing it does it do the nested loop and\n take forever.\n\n\n\n ----------- Query plan for view defined without ::text\n ---------------\n\nNested Loop  (cost=17.440..272.590 rows=1\n width=1810) (actual time=0.515..0.527 rows=1 loops=1)\n   ->  Nested Loop  (cost=17.440..265.480 rows=1 width=1275)\n (actual time=0.471..0.482 rows=1 loops=1)\n         ->  Nested Loop  (cost=17.440..265.190 rows=1\n width=761) (actual time=0.460..0.471 rows=1 loops=1)\n               ->  Nested Loop  (cost=17.440..258.910 rows=1\n width=186) (actual time=0.437..0.447 rows=1 loops=1)\n                     ->  Nested Loop  (cost=17.440..258.590\n rows=1 width=154) (actual time=0.417..0.425 rows=1 loops=1)\n                           ->  Nested Loop \n (cost=17.440..252.240 rows=1 width=160) (actual\n time=0.388..0.395 rows=1 loops=1)\n                                   Join Filter:\n ((alpha_yankee.bravo_papa)::text =\n (six_zulu.kilo_uniform)::text)\n                                 ->  Nested Loop \n (cost=0.000..108.990 rows=7 width=137) (actual time=0.107..0.109\n rows=1 loops=1)\n                                       ->  Nested Loop \n (cost=0.000..102.780 rows=10 width=124) (actual\n time=0.077..0.078 rows=1 loops=1)\n                                             ->  Index Scan\n using juliet_yankee on alpha_yankee  (cost=0.000..18.240 rows=13\n width=84) (actual time=0.043..0.044 rows=1 loops=1)\n                                                     Index Cond:\n ((bravo_charlie)::text = 'tango'::text)\n                                             ->  Index Scan\n using charlie_quebec on juliet_three  (cost=0.000..6.490 rows=1\n width=40) (actual time=0.029..0.029 rows=1 loops=1)\n                                                     Index Cond:\n ((echo)::text = (alpha_yankee.kilo_yankee)::text)\n                                                     Filter:\n ((four)::text = 'alpha_whiskey'::text)\n                                       ->  Index Scan using\n charlie_yankee on romeo  (cost=0.000..0.610 rows=1 width=33)\n (actual time=0.027..0.027 rows=1 loops=1)\n                                               Index Cond:\n ((echo)::text = (juliet_three.lima_victor)::text)\n                                               Filter:\n ((lima_bravo)::text = 'alpha_whiskey'::text)\n                                 ->  Bitmap Heap Scan on papa\n six_zulu  (cost=17.440..20.450 rows=1 width=64) (actual\n time=0.268..0.276 rows=21 loops=1)\n                                         Recheck Cond:\n (((charlie_victor)::text = (juliet_three.echo)::text) AND\n ((bravo_india)::text = (alpha_yankee.juliet_whiskey)::text))\n                                       ->  BitmapAnd \n (cost=17.440..17.440 rows=1 width=0) (actual time=0.255..0.255\n rows=0 loops=1)\n                                             ->  Bitmap Index\n Scan on alpha_foxtrot  (cost=0.000..6.710 rows=28 width=0)\n (actual time=0.048..0.048 rows=21 loops=1)\n                                                     Index Cond:\n ((charlie_victor)::text = (juliet_three.echo)::text)\n                                             ->  Bitmap Index\n Scan on delta  (cost=0.000..10.440 rows=108 width=0) (actual\n time=0.202..0.202 rows=757 loops=1)\n                                                     Index Cond:\n ((bravo_india)::text = (alpha_yankee.juliet_whiskey)::text)\n                           ->  Index Scan using whiskey_india\n on alpha_lima  
(cost=0.000..6.340 rows=1 width=57) (actual\n time=0.026..0.027 rows=1 loops=1)\n                                   Index Cond: ((echo)::text =\n (six_zulu.bravo_india)::text)\n                     ->  Index Scan using golf on\n whiskey_oscar  (cost=0.000..0.310 rows=1 width=43) (actual\n time=0.017..0.018 rows=1 loops=1)\n                             Index Cond: ((echo)::text =\n (alpha_lima.whiskey_six)::text)\n               ->  Index Scan using bravo_foxtrot on mike_mike\n lima_charlie  (cost=0.000..6.270 rows=1 width=617) (actual\n time=0.020..0.020 rows=1 loops=1)\n                       Index Cond: ((echo)::text =\n (six_zulu.kilo_uniform)::text)\n         ->  Index Scan using charlie_papa on mike_oscar \n (cost=0.000..0.270 rows=1 width=530) (actual time=0.008..0.008\n rows=1 loops=1)\n                 Index Cond: ((echo)::text =\n (lima_charlie.yankee)::text)\n   ->  Index Scan using juliet_victor on juliet_six six_seven \n (cost=0.000..7.080 rows=1 width=556) (actual time=0.033..0.034\n rows=1 loops=1)\n           Index Cond: ((echo)::text = 'tango'::text)\n  Total runtime: 0.871 ms\n----------------------------------------------------------------------------------------\n\n ----------- Query plan for view defined WITH ::text\n ---------------\nNested Loop  (cost=176136.470..3143249.320\n rows=1 width=1810) (actual time=16357.306..815170.609 rows=1\n loops=1)\n     Join Filter: ((alpha_yankee.bravo_charlie)::text =\n (six_seven.echo)::text)\n   ->  Index Scan using juliet_victor on juliet_six six_seven \n (cost=0.000..7.080 rows=1 width=556) (actual time=0.035..0.038\n rows=1 loops=1)\n           Index Cond: ((echo)::text = 'tango'::text)\n   ->  Nested Loop  (cost=176136.470..3143242.190 rows=2\n width=1275) (actual time=13071.765..812874.705 rows=6815445\n loops=1)\n         ->  Nested Loop  (cost=176136.470..3143241.560 rows=2\n width=1243) (actual time=13071.742..760766.802 rows=6815445\n loops=1)\n               ->  Hash Join  (cost=176136.470..3143228.850\n rows=2 width=1249) (actual time=13071.694..632785.712\n rows=6815445 loops=1)\n                       Hash Cond: ((six_zulu.kilo_uniform)::text\n = (lima_charlie.echo)::text)\n                     ->  Merge Join \n (cost=174404.520..3141496.860 rows=2 width=160) (actual\n time=13023.713..619787.785 rows=6815445 loops=1)\n                             Merge Cond:\n ((juliet_three.echo)::text = (six_zulu.charlie)::text)\n                             Join Filter:\n (((alpha_yankee.juliet_whiskey)::text =\n (six_zulu.bravo_india)::text) AND\n ((alpha_yankee.bravo_papa)::text =\n (six_zulu.kilo_uniform)::text))\n                           ->  Merge Join \n (cost=174391.960..1013040.600 rows=5084399 width=137) (actual\n time=13023.660..68777.622 rows=7805731 loops=1)\n                                   Merge Cond:\n ((alpha_yankee.kilo_yankee)::text = (juliet_three.echo)::text)\n                                 ->  Index Scan using five on\n alpha_yankee  (cost=0.000..739352.050 rows=9249725 width=84)\n (actual time=0.027..14936.587 rows=9204640 loops=1)\n                                 ->  Sort \n (cost=174391.750..175895.330 rows=601433 width=53) (actual\n time=13023.526..14415.639 rows=7952904 loops=1)\n                                         Sort Key:\n juliet_three.echo\n                                         Sort Method: quicksort \n Memory: 139105kB\n                                       ->  Hash Join \n (cost=62105.650..116660.060 rows=601433 width=53) (actual\n time=669.244..2278.990 rows=814472 loops=1)\n         
                                      Hash Cond:\n ((juliet_three.lima_victor)::text = (romeo.echo)::text)\n                                             ->  Seq Scan on\n juliet_three  (cost=0.000..40391.860 rows=814822 width=40)\n (actual time=0.009..677.160 rows=814473 loops=1)\n                                                     Filter:\n ((four)::text = 'alpha_whiskey'::text)\n                                             ->  Hash \n (cost=57562.740..57562.740 rows=363433 width=33) (actual\n time=668.812..668.812 rows=363736 loops=1)\n                                                     Buckets:\n 65536  Batches: 1  Memory Usage: 23832kB\n                                                   ->  Seq\n Scan on romeo  (cost=0.000..57562.740 rows=363433 width=33)\n (actual time=0.012..489.104 rows=363736 loops=1)\n                                                          \n Filter: ((lima_bravo)::text = 'alpha_whiskey'::text)\n                           ->  Materialize \n (cost=0.000..1192114.040 rows=10236405 width=64) (actual\n time=0.030..72475.323 rows=140608673 loops=1)\n                                 ->  Index Scan using\n alpha_foxtrot on papa six_zulu  (cost=0.000..1166523.030\n rows=10236405 width=64) (actual time=0.024..22466.849\n rows=10176345 loops=1)\n                     ->  Hash  (cost=1568.500..1568.500\n rows=13076 width=1131) (actual time=47.954..47.954 rows=13054\n loops=1)\n                             Buckets: 2048  Batches: 1  Memory\n Usage: 13551kB\n                           ->  Hash Join \n (cost=19.950..1568.500 rows=13076 width=1131) (actual\n time=0.415..24.461 rows=13054 loops=1)\n                                   Hash Cond:\n ((lima_charlie.yankee)::text = (mike_oscar.echo)::text)\n                                 ->  Seq Scan on mike_mike\n lima_charlie  (cost=0.000..1368.760 rows=13076 width=617)\n (actual time=0.006..5.948 rows=13054 loops=1)\n                                 ->  Hash \n (cost=18.310..18.310 rows=131 width=530) (actual\n time=0.397..0.397 rows=131 loops=1)\n                                         Buckets: 1024  Batches:\n 1  Memory Usage: 73kB\n                                       ->  Seq Scan on\n mike_oscar  (cost=0.000..18.310 rows=131 width=530) (actual\n time=0.007..0.221 rows=131 loops=1)\n               ->  Index Scan using whiskey_india on\n alpha_lima  (cost=0.000..6.340 rows=1 width=57) (actual\n time=0.017..0.017 rows=1 loops=6815445)\n                       Index Cond: ((echo)::text =\n (six_zulu.bravo_india)::text)\n         ->  Index Scan using golf on whiskey_oscar \n (cost=0.000..0.310 rows=1 width=43) (actual time=0.006..0.006\n rows=1 loops=6815445)\n                 Index Cond: ((echo)::text =\n (alpha_lima.whiskey_six)::text)\n  Total runtime: 815589.464 ms\n----------------------------------------------------------------------------------------\n\n If I set enable_nestloop = off, then it works perfectly, however I\n don't really have the option to do this, I'd rather see what's\n causing it.\n\n Any thoughts?\n\n Thanks,\n - Brian F", "msg_date": "Thu, 05 Sep 2013 16:46:55 -0600", "msg_from": "Brian Fehrle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View with and without ::text casting performs differently." }, { "msg_contents": "Brian Fehrle <[email protected]> writes:\n> I have a view, that when created with our create statement works \n> wonderfully, a query on the view with a standard where clause that \n> narrows the result to a single row performs in under a single ms. 
\n> However, when we export this view and re-import it (dump and restore of \n> the database, which happens often), the exported version of the view has \n> been modified by Postgres to include various typecasting of some columns \n> to text.\n\nThis is normal (varchar doesn't actually have any operations of its own).\n\n> All columns that it typecasts to text are varchar(20), so there is \n> nothing wrong in what it's doing there. However, with the view \n> definition including the ::text casting, the query planner changes and \n> it goes into a nested loop, taking a query from <1ms to over ten minutes.\n\nI rather doubt that the now-explicit-instead-of-implicit casts have much\nto do with that. It seems more likely that you forgot to re-ANALYZE in\nthe new database, or there are some different planner settings, or\nsomething along that line.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 05 Sep 2013 19:50:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View with and without ::text casting performs differently." }, { "msg_contents": "On 09/05/2013 05:50 PM, Tom Lane wrote:\n> Brian Fehrle <[email protected]> writes:\n>> I have a view, that when created with our create statement works\n>> wonderfully, a query on the view with a standard where clause that\n>> narrows the result to a single row performs in under a single ms.\n>> However, when we export this view and re-import it (dump and restore of\n>> the database, which happens often), the exported version of the view has\n>> been modified by Postgres to include various typecasting of some columns\n>> to text.\n> This is normal (varchar doesn't actually have any operations of its own).\n>\n>> All columns that it typecasts to text are varchar(20), so there is\n>> nothing wrong in what it's doing there. However, with the view\n>> definition including the ::text casting, the query planner changes and\n>> it goes into a nested loop, taking a query from <1ms to over ten minutes.\n> I rather doubt that the now-explicit-instead-of-implicit casts have much\n> to do with that. It seems more likely that you forgot to re-ANALYZE in\n> the new database, or there are some different planner settings, or\n> something along that line.\nI have two versions of the view in place on the same server, one with \nthe typecasting and one without, and this is where I see the differences \n(no ::text runs in 0.5ms and with ::text runs in 13 or so minutes with \nnested loop), so it's all running off the same statistics on the data.\n\nRunning an analyse on all tables involved did not change the query plan \non the 'bad' version of the view (default_statistics_target = 400)\n\n- Brian F\n>\n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 06 Sep 2013 10:42:09 -0600", "msg_from": "Brian Fehrle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View with and without ::text casting performs differently." }, { "msg_contents": "Brian Fehrle <[email protected]> writes:\n> On 09/05/2013 05:50 PM, Tom Lane wrote:\n>> I rather doubt that the now-explicit-instead-of-implicit casts have much\n>> to do with that. 
It seems more likely that you forgot to re-ANALYZE in\n>> the new database, or there are some different planner settings, or\n>> something along that line.\n\n> I have two versions of the view in place on the same server, one with \n> the typecasting and one without, and this is where I see the differences \n> (no ::text runs in 0.5ms and with ::text runs in 13 or so minutes with \n> nested loop), so it's all running off the same statistics on the data.\n\nHm. Can you provide a self-contained example?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 06 Sep 2013 14:35:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View with and without ::text casting performs differently." }, { "msg_contents": "Good Afternoon,\n\nI also came across this too.\nThe issue goes away if you keep your join columns the same data type on\nboth tables.\nThe nested loop happens when the join columns are not the same data type.\nHope this helps.\n\nBest\n-Mark\n\n\nOn Fri, Sep 6, 2013 at 2:35 PM, Tom Lane <[email protected]> wrote:\n\n> Brian Fehrle <[email protected]> writes:\n> > On 09/05/2013 05:50 PM, Tom Lane wrote:\n> >> I rather doubt that the now-explicit-instead-of-implicit casts have much\n> >> to do with that. It seems more likely that you forgot to re-ANALYZE in\n> >> the new database, or there are some different planner settings, or\n> >> something along that line.\n>\n> > I have two versions of the view in place on the same server, one with\n> > the typecasting and one without, and this is where I see the differences\n> > (no ::text runs in 0.5ms and with ::text runs in 13 or so minutes with\n> > nested loop), so it's all running off the same statistics on the data.\n>\n> Hm. Can you provide a self-contained example?\n>\n> regards, tom lane\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nGood Afternoon,I also came across this too.The issue goes away if you keep your join columns the same data type on both tables.The nested loop happens when the join columns are not the same data type.\nHope this helps.Best-MarkOn Fri, Sep 6, 2013 at 2:35 PM, Tom Lane <[email protected]> wrote:\nBrian Fehrle <[email protected]> writes:\n\n> On 09/05/2013 05:50 PM, Tom Lane wrote:\n>> I rather doubt that the now-explicit-instead-of-implicit casts have much\n>> to do with that.  It seems more likely that you forgot to re-ANALYZE in\n>> the new database, or there are some different planner settings, or\n>> something along that line.\n\n> I have two versions of the view in place on the same server, one with\n> the typecasting and one without, and this is where I see the differences\n> (no ::text runs in 0.5ms and with ::text runs in 13 or so minutes with\n> nested loop), so it's all running off the same statistics on the data.\n\nHm.  Can you provide a self-contained example?\n\n                        regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 6 Sep 2013 15:46:37 -0400", "msg_from": "Mark Mayo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View with and without ::text casting performs differently." 
}, { "msg_contents": "*Sorry correction.\nI meant the Materialize disappears when the join columns are the same data\ntype.\n\n\nOn Fri, Sep 6, 2013 at 3:46 PM, Mark Mayo <[email protected]> wrote:\n\n> Good Afternoon,\n>\n> I also came across this too.\n> The issue goes away if you keep your join columns the same data type on\n> both tables.\n> The nested loop happens when the join columns are not the same data type.\n> Hope this helps.\n>\n> Best\n> -Mark\n>\n>\n> On Fri, Sep 6, 2013 at 2:35 PM, Tom Lane <[email protected]> wrote:\n>\n>> Brian Fehrle <[email protected]> writes:\n>> > On 09/05/2013 05:50 PM, Tom Lane wrote:\n>> >> I rather doubt that the now-explicit-instead-of-implicit casts have\n>> much\n>> >> to do with that. It seems more likely that you forgot to re-ANALYZE in\n>> >> the new database, or there are some different planner settings, or\n>> >> something along that line.\n>>\n>> > I have two versions of the view in place on the same server, one with\n>> > the typecasting and one without, and this is where I see the differences\n>> > (no ::text runs in 0.5ms and with ::text runs in 13 or so minutes with\n>> > nested loop), so it's all running off the same statistics on the data.\n>>\n>> Hm. Can you provide a self-contained example?\n>>\n>> regards, tom lane\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\n*Sorry correction.I meant the Materialize disappears when the join columns are the same data type.On Fri, Sep 6, 2013 at 3:46 PM, Mark Mayo <[email protected]> wrote:\nGood Afternoon,I also came across this too.The issue goes away if you keep your join columns the same data type on both tables.\nThe nested loop happens when the join columns are not the same data type.\nHope this helps.Best-Mark\nOn Fri, Sep 6, 2013 at 2:35 PM, Tom Lane <[email protected]> wrote:\nBrian Fehrle <[email protected]> writes:\n\n\n> On 09/05/2013 05:50 PM, Tom Lane wrote:\n>> I rather doubt that the now-explicit-instead-of-implicit casts have much\n>> to do with that.  It seems more likely that you forgot to re-ANALYZE in\n>> the new database, or there are some different planner settings, or\n>> something along that line.\n\n> I have two versions of the view in place on the same server, one with\n> the typecasting and one without, and this is where I see the differences\n> (no ::text runs in 0.5ms and with ::text runs in 13 or so minutes with\n> nested loop), so it's all running off the same statistics on the data.\n\nHm.  Can you provide a self-contained example?\n\n                        regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 6 Sep 2013 15:49:42 -0400", "msg_from": "Mark Mayo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View with and without ::text casting performs differently." }, { "msg_contents": "On 09/06/2013 12:35 PM, Tom Lane wrote:\n> Brian Fehrle <[email protected]> writes:\n>> On 09/05/2013 05:50 PM, Tom Lane wrote:\n>>> I rather doubt that the now-explicit-instead-of-implicit casts have much\n>>> to do with that. 
It seems more likely that you forgot to re-ANALYZE in\n>>> the new database, or there are some different planner settings, or\n>>> something along that line.\n>> I have two versions of the view in place on the same server, one with\n>> the typecasting and one without, and this is where I see the differences\n>> (no ::text runs in 0.5ms and with ::text runs in 13 or so minutes with\n>> nested loop), so it's all running off the same statistics on the data.\n> Hm. Can you provide a self-contained example?\n\nI'll see what I can do to recreate this with bogus data. It's sensitive \ndata that may just be some sort of anomaly in terms of the data \ndistribution that is causing it.\n\n- Brian F\n>\n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 18 Sep 2013 12:50:05 -0600", "msg_from": "Brian Fehrle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View with and without ::text casting performs differently." } ]
[ { "msg_contents": "Hi. I have one query which possibly is not optimized by planner (not using index for aggregate having clause restriction):\n\nexplain\nSELECT stocktaking_id\nFROM t_weighting\nGROUP BY stocktaking_id\nHAVING MIN(stat_item_start) BETWEEN '2013-08-01' AND '2013-09-01';\n\nwith result:\n\"HashAggregate (cost=59782.43..59787.39 rows=248 width=32)\"\n\" Filter: ((min(stat_item_start) >= '2013-08-01 00:00:00'::timestamp without time zone) AND (min(stat_item_start) <= '2013-09-01 00:00:00'::timestamp without time zone))\"\n\" -> Seq Scan on t_weighting (cost=0.00..49002.39 rows=1437339 width=32)\"\n\nI have probably an obvious tough, that query will touch only rows with stat_item_start values only within given constrains in having clause. If (and only if) planner have some info that MIN and MAX aggregate functions could return only one of values that comes into them, it can search only rows within given constraints in having part of select. Something like this:\n\n\nexplain\nSELECT stocktaking_id\nFROM t_weighting\n--added restriction by hand:\nWHERE stat_item_start BETWEEN '2013-08-01' AND '2013-09-01'\nGROUP BY stocktaking_id\nHAVING MIN(stat_item_start) BETWEEN '2013-08-01' AND '2013-09-01';\n\nwith result:\n\"HashAggregate (cost=8.45..8.47 rows=1 width=32)\"\n\" Filter: ((min(stat_item_start) >= '2013-08-01 00:00:00'::timestamp without time zone) AND (min(stat_item_start) <= '2013-09-01 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using idx_t_weighting_stat_item_start on t_weighting (cost=0.00..8.44 rows=1 width=32)\"\n\" Index Cond: ((stat_item_start >= '2013-08-01 00:00:00'::timestamp without time zone) AND (stat_item_start <= '2013-09-01 00:00:00'::timestamp without time zone))\"\n\nIs this optimization by planner possible, or it is already have been done on newer DB version (I am using PostgreSQL 8.4.13)? IMHO it should be added into planner if possible for all built in aggregate functions.\n\nBest regards,\n--\nIng. Ľubomír Varga\n+421 (0)908 541 700\[email protected]\nwww.plaintext.sk\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 6 Sep 2013 09:05:22 +0100 (GMT+01:00)", "msg_from": "=?utf-8?Q?=C4=BDubom=C3=ADr_Varga?= <[email protected]>", "msg_from_op": true, "msg_subject": "planner and having clausule" } ]
[ { "msg_contents": "Hi again, my mistake. I have found why there are not this optimization (thus I have found other one, correct, see bellow). I can have for example:\n\nstocktaking_id | stat_item_start\n------------------------------------\nabc | 2013-01-01\nabc | 2013-08-08\n\nAnd when applied my \"optimization\", it will return me abc (minimum for abc is 2013-01-01 and it does not conform having restriction, but I have applied where restriction to date which broke my result...)\n\nProper optimization should be:\n\nexplain\nSELECT stocktaking_id\nFROM t_weighting\n--proper optimization restriction\nWHERE stocktaking_id IN (SELECT DISTINCT stocktaking_id FROM t_weighting WHERE stat_item_start BETWEEN '2013-08-01' AND '2013-09-01')\nGROUP BY stocktaking_id\nHAVING MIN(stat_item_start) BETWEEN '2013-08-01' AND '2013-09-01';\n\nwith result:\n\"HashAggregate (cost=15485.12..15490.08 rows=248 width=32)\"\n\" Filter: ((min(public.t_weighting.stat_item_start) >= '2013-08-01 00:00:00'::timestamp without time zone) AND (min(public.t_weighting.stat_item_start) <= '2013-09-01 00:00:00'::timestamp without time zone))\"\n\" -> Nested Loop (cost=222.05..15441.65 rows=5796 width=32)\"\n\" -> HashAggregate (cost=8.47..8.48 rows=1 width=32)\"\n\" -> Subquery Scan \"ANY_subquery\" (cost=8.45..8.47 rows=1 width=32)\"\n\" -> HashAggregate (cost=8.45..8.46 rows=1 width=24)\"\n\" -> Index Scan using idx_t_weighting_stat_item_start on t_weighting (cost=0.00..8.44 rows=1 width=24)\"\n\" Index Cond: ((stat_item_start >= '2013-08-01 00:00:00'::timestamp without time zone) AND (stat_item_start <= '2013-09-01 00:00:00'::timestamp without time zone))\"\n\" -> Bitmap Heap Scan on t_weighting (cost=213.58..15360.73 rows=5796 width=32)\"\n\" Recheck Cond: ((public.t_weighting.stocktaking_id)::text = (\"ANY_subquery\".stocktaking_id)::text)\"\n\" -> Bitmap Index Scan on idx_t_weighting_stocktaking_id_user_id (cost=0.00..212.13 rows=5796 width=0)\"\n\" Index Cond: ((public.t_weighting.stocktaking_id)::text = (\"ANY_subquery\".stocktaking_id)::text)\"\n\n\nThis will be probably a little bit harder to use in planner in general manner.\n\nBest regards,\n--\nIng. Ľubomír Varga\n+421 (0)908 541 700\[email protected]\nwww.plaintext.sk\n\n----- \"Ľubomír Varga\" <[email protected]> wrote:\n\n> Hi. 
I have one query which possibly is not optimized by planner (not\n> using index for aggregate having clause restriction):\n> \n> explain\n> SELECT stocktaking_id\n> FROM t_weighting\n> GROUP BY stocktaking_id\n> HAVING MIN(stat_item_start) BETWEEN '2013-08-01' AND '2013-09-01';\n> \n> with result:\n> \"HashAggregate (cost=59782.43..59787.39 rows=248 width=32)\"\n> \" Filter: ((min(stat_item_start) >= '2013-08-01 00:00:00'::timestamp\n> without time zone) AND (min(stat_item_start) <= '2013-09-01\n> 00:00:00'::timestamp without time zone))\"\n> \" -> Seq Scan on t_weighting (cost=0.00..49002.39 rows=1437339\n> width=32)\"\n> \n> I have probably an obvious tough, that query will touch only rows with\n> stat_item_start values only within given constrains in having clause.\n> If (and only if) planner have some info that MIN and MAX aggregate\n> functions could return only one of values that comes into them, it can\n> search only rows within given constraints in having part of select.\n> Something like this:\n> \n> \n> explain\n> SELECT stocktaking_id\n> FROM t_weighting\n> --added restriction by hand:\n> WHERE stat_item_start BETWEEN '2013-08-01' AND '2013-09-01'\n> GROUP BY stocktaking_id\n> HAVING MIN(stat_item_start) BETWEEN '2013-08-01' AND '2013-09-01';\n> \n> with result:\n> \"HashAggregate (cost=8.45..8.47 rows=1 width=32)\"\n> \" Filter: ((min(stat_item_start) >= '2013-08-01 00:00:00'::timestamp\n> without time zone) AND (min(stat_item_start) <= '2013-09-01\n> 00:00:00'::timestamp without time zone))\"\n> \" -> Index Scan using idx_t_weighting_stat_item_start on t_weighting\n> (cost=0.00..8.44 rows=1 width=32)\"\n> \" Index Cond: ((stat_item_start >= '2013-08-01\n> 00:00:00'::timestamp without time zone) AND (stat_item_start <=\n> '2013-09-01 00:00:00'::timestamp without time zone))\"\n> \n> Is this optimization by planner possible, or it is already have been\n> done on newer DB version (I am using PostgreSQL 8.4.13)? IMHO it\n> should be added into planner if possible for all built in aggregate\n> functions.\n> \n> Best regards,\n> --\n> Ing. Ľubomír Varga\n> +421 (0)908 541 700\n> [email protected]\n> www.plaintext.sk\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 6 Sep 2013 09:44:20 +0100 (GMT+01:00)", "msg_from": "=?utf-8?Q?=C4=BDubom=C3=ADr_Varga?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: planner and having clausule" } ]
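The corrected rewrite above can also be phrased with EXISTS instead of IN (SELECT DISTINCT ...). It expresses the same idea, pre-filter the candidate groups and then re-check MIN(), without spelling out the DISTINCT; whether it plans any better on 8.4 is untested here. Table, column, and date values are taken from the thread:

```sql
SELECT w.stocktaking_id
FROM t_weighting w
WHERE EXISTS (
    SELECT 1
    FROM t_weighting s
    WHERE s.stocktaking_id  = w.stocktaking_id
      AND s.stat_item_start BETWEEN '2013-08-01' AND '2013-09-01'
)
GROUP BY w.stocktaking_id
HAVING MIN(w.stat_item_start) BETWEEN '2013-08-01' AND '2013-09-01';
```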
[ { "msg_contents": "Dear All,\n\nI'm dealing with restoring 3 DBs at the same time. Previously this task was sequential, but we need to make our daily maintenance window as short as possible.\nIs it possible, from your point of view, to restore more than 1 DB at a time on the same server? I haven't found any clear answer on the web.\nMany thanks in advance.\n\nBR\nRoberto\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 6 Sep 2013 17:52:07 +0200 (CEST)", "msg_from": "Roberto Grandi <[email protected]>", "msg_from_op": true, "msg_subject": "RESTORE multiple DBs concurrently" }, { "msg_contents": "On Sat, Sep 7, 2013 at 12:52 AM, Roberto Grandi\n<[email protected]> wrote:\n> Is it possible, from your point of view, to restore more than 1 DB at a time on the same server?\nYes, it is possible: simply run multiple instances of pg_restore in\nparallel and just don't blow up your disk(s) I/O.\n\nAlso, why not restore one database at a time and accelerate\na single restore with parallel jobs? If your dump format is compatible\nwith this option, use -j when running pg_restore to define a number of\nconcurrent jobs. More info here:\nhttp://www.postgresql.org/docs/9.2/interactive/app-pgrestore.html\nThis might be useful as well in your case.\n--\nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 7 Sep 2013 18:46:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RESTORE multiple DBs concurrently" } ]
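A minimal sketch combining both suggestions from this thread: several databases restored at once, with each restore itself parallelized. Database names, dump paths, and job counts are illustrative, and -j requires a custom-format (or, on newer releases, directory-format) archive:

```bash
#!/bin/sh
# Restore three databases concurrently, each with 4 parallel jobs.
# Tune the total job count to what the server's disks and CPUs can absorb.
pg_restore -d db1 -j 4 /backups/db1.dump &
pg_restore -d db2 -j 4 /backups/db2.dump &
pg_restore -d db3 -j 4 /backups/db3.dump &
wait    # block until all three restores have finished
```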
[ { "msg_contents": "Let me clarify further - when we reconstruct our schema (without the upgrade step) via a sql script, the problem still persists. Restoring an upgraded DB which contains existing data into exactly the same instance will correct the behavior. \n\n--------\n\nMel Llaguno | Principal Engineer (Performance/Deployment)\nCoverity | 800 6th Avenue S.W. | Suite 410 | Calgary, AB | Canada | T2P 3G3\[email protected]\n\n________________________________________\nFrom: [email protected]\n[[email protected]] on behalf of Andrew Dunstan\n[[email protected]]\nSent: Monday, September 09, 2013 6:38 PM\nTo: Jeff Janes\nCc: Josh Berkus; Amit Kapila; [email protected]\nSubject: Re: [PERFORM] Performance bug in prepared statement binding in\n9.2?\n\nOn 08/01/2013 03:20 PM, Jeff Janes wrote:\n> On Thu, Aug 1, 2013 at 10:58 AM, Josh Berkus <[email protected]> wrote:\n>> Amit, All:\n>>\n>> So we just retested this on 9.3b2. The performance is the same as 9.1\n>> and 9.2; that is, progressively worse as the test cycles go on, and\n>> unacceptably slow compared to 8.4.\n>>\n>> Some issue introduced in 9.1 is causing BINDs to get progressively\n>> slower as the PARSEs BINDs get run repeatedly. Per earlier on this\n>> thread, that can bloat to 200X time required for a BIND, and it's\n>> definitely PostgreSQL-side.\n>>\n>> I'm trying to produce a test case which doesn't involve the user's\n>> application. However, hints on other things to analyze would be keen.\n> Does it seem to be all CPU time (it is hard to imagine what else it\n> would be, but...)\n>\n> Could you use oprofile or perf or gprof to get a profile of the\n> backend during a run? That should quickly narrow it down to which C\n> function has the problem.\n>\n> Did you test 9.0 as well?\n\n\nThis has been tested back to 9.0. What we have found is that the problem\ndisappears if the database has come in via dump/restore, but is present\nif it is the result of pg_upgrade. There are some long-running\ntransactions also running alongside this - we are currently planning a\ntest where those are not present. We're also looking at constructing a\nself-contained test case.\n\nHere is some perf output from the bad case:\n\n + 14.67% postgres [.] heap_hot_search_buffer\n + 11.45% postgres [.] LWLockAcquire\n + 8.39% postgres [.] LWLockRelease\n + 6.60% postgres [.] _bt_checkkeys\n + 6.39% postgres [.] PinBuffer\n + 5.96% postgres [.] hash_search_with_hash_value\n + 5.43% postgres [.] hash_any\n + 5.14% postgres [.] UnpinBuffer\n + 3.43% postgres [.] ReadBuffer_common\n + 2.34% postgres [.] index_fetch_heap\n + 2.04% postgres [.] heap_page_prune_opt\n + 2.00% libc-2.15.so [.] 0x8041b\n + 1.94% postgres [.] _bt_next\n + 1.83% postgres [.] btgettuple\n + 1.76% postgres [.] index_getnext_tid\n + 1.70% postgres [.] LockBuffer\n + 1.54% postgres [.] ReadBufferExtended\n + 1.25% postgres [.] FunctionCall2Coll\n + 1.14% postgres [.] HeapTupleSatisfiesNow\n + 1.09% postgres [.] ReleaseAndReadBuffer\n + 0.94% postgres [.] ResourceOwnerForgetBuffer\n + 0.81% postgres [.] _bt_saveitem\n + 0.80% postgres [.] _bt_readpage\n + 0.79% [kernel.kallsyms] [k] 0xffffffff81170861\n + 0.64% postgres [.] CheckForSerializableConflictOut\n + 0.60% postgres [.] ResourceOwnerEnlargeBuffers\n + 0.59% postgres [.] BufTableLookup\n\nand here is the good case:\n\n + 9.54% libc-2.15.so [.] 0x15eb1f\n + 7.31% [kernel.kallsyms] [k] 0xffffffff8117924b\n + 5.65% postgres [.] AllocSetAlloc\n + 3.57% postgres [.] SearchCatCache\n + 2.67% postgres [.] 
hash_search_with_hash_value\n + 1.69% postgres [.] base_yyparse\n + 1.49% libc-2.15.so [.] vfprintf\n + 1.34% postgres [.] MemoryContextAllocZeroAligned\n + 1.34% postgres [.] XLogInsert\n + 1.24% postgres [.] copyObject\n + 1.10% postgres [.] palloc\n + 1.09% postgres [.] _bt_compare\n + 1.04% postgres [.] core_yylex\n + 0.96% postgres [.] _bt_checkkeys\n + 0.95% postgres [.] expression_tree_walker\n + 0.88% postgres [.] ScanKeywordLookup\n + 0.87% postgres [.] pg_encoding_mbcliplen\n + 0.86% postgres [.] LWLockAcquire\n + 0.72% postgres [.] nocachegetattr\n + 0.67% postgres [.] FunctionCall2Coll\n + 0.63% postgres [.] fmgr_info_cxt_security\n + 0.62% postgres [.] hash_any\n + 0.62% postgres [.] ExecInitExpr\n + 0.58% postgres [.] hash_uint32\n + 0.55% postgres [.] PostgresMain\n + 0.55% postgres [.] LWLockRelease\n + 0.54% postgres [.] lappend\n + 0.52% postgres [.] slot_deform_tuple\n + 0.50% postgres [.] PinBuffer\n + 0.48% postgres [.] AllocSetFree\n + 0.46% postgres [.] check_stack_depth\n + 0.44% postgres [.] DirectFunctionCall1Coll\n + 0.43% postgres [.] ExecScanHashBucket\n + 0.36% postgres [.] deconstruct_array\n + 0.36% postgres [.] CatalogCacheComputeHashValue\n + 0.35% postgres [.] pfree\n + 0.33% libc-2.15.so [.] _IO_default_xsputn\n + 0.32% libc-2.15.so [.] malloc\n + 0.32% postgres [.] TupleDescInitEntry\n + 0.30% postgres [.] new_tail_cell.isra.2\n + 0.30% libm-2.15.so [.] 0x5898\n + 0.30% postgres [.] LockAcquireExtended\n + 0.30% postgres [.] _bt_first\n + 0.29% postgres [.] add_paths_to_joinrel\n + 0.28% postgres [.] MemoryContextCreate\n + 0.28% postgres [.] appendBinaryStringInfo\n + 0.28% postgres [.] MemoryContextStrdup\n + 0.27% postgres [.] heap_hot_search_buffer\n + 0.27% postgres [.] GetSnapshotData\n + 0.26% postgres [.] hash_search\n + 0.26% postgres [.] heap_getsysattr\n + 0.26% [vdso] [.] 0x7fff681ff70c\n + 0.25% postgres [.] compare_scalars\n + 0.25% postgres [.] pg_verify_mbstr_len\n\n\n\ncheers\n\nandrew\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Sep 2013 01:36:27 +0000", "msg_from": "Mel Llaguno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance bug in prepared statement binding in 9.2?" }, { "msg_contents": "On Tue, Sep 10, 2013 at 01:36:27AM +0000, Mel Llaguno wrote:\n> Let me clarify further - when we reconstruct our schema (without the\n> upgrade step) via a sql script, the problem still persists. Restoring\n> an upgraded DB which contains existing data into exactly the same\n> instance will correct the behavior.\n\nI do not understand what you are saying above.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 9 Sep 2013 21:45:41 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance bug in prepared statement binding in 9.2?" } ]
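One way to capture the per-backend profile suggested earlier in the thread, scoped to the process doing the slow BINDs. This is a sketch: the PID is a placeholder, the sampling window is arbitrary, and useful symbol names require debug symbols:

```bash
# Find the backend of interest (column names per 9.2's pg_stat_activity).
psql -Atc "SELECT pid, state, query FROM pg_stat_activity WHERE state <> 'idle';"

# Attach perf to that backend for 60 seconds, then inspect the report.
# 12345 is a placeholder PID; -g only yields useful call graphs if the
# binaries keep frame pointers.
perf record -g -p 12345 -- sleep 60
perf report
```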
[ { "msg_contents": "We're currently using an embedded PG 8.4.17 for our application. Our PG 9.x tests consists of one of the following :\n\n- Take a 8.4.17 DB which contains only our application schema and required seed data and use pg_upgrade to create a 9.x database directory, run the analyze_new_cluster.sh script and fire up the application. Under this set of conditions, the bind connection issue is present during our test.\n\n- Start with a fresh PG 9.x DB (using use create_db) and use psql to recreate our application schema and required seed data. When the application is started and our test executed, the bind connection issue is still present.\n\nIn both of the above cases, a base application schema is used. \n\nIf I upgrade an 8.4.17 DB that contains additional application data (generated by interaction with our application) to 9.x, the bind connection issue is no longer present. Restoring this upgraded 9.x DB into any PG instance in the previously described scenarios also seems to fix the bind connection issue.\n\nPlease let me know if this clarifies my earlier post. \n\nThanks, M.\n\nMel Llaguno | Principal Engineer (Performance/Deployment)\nCoverity | 800 6th Avenue S.W. | Suite 410 | Calgary, AB | Canada | T2P 3G3\[email protected]\n________________________________________\nFrom: Bruce Momjian [[email protected]]\nSent: Monday, September 09, 2013 7:45 PM\nTo: Mel Llaguno\nCc: Andrew Dunstan; Jeff Janes; Josh Berkus; Amit Kapila;\[email protected]\nSubject: Re: [PERFORM] Performance bug in prepared statement binding in\n9.2?\n\nOn Tue, Sep 10, 2013 at 01:36:27AM +0000, Mel Llaguno wrote:\n> Let me clarify further - when we reconstruct our schema (without the\n> upgrade step) via a sql script, the problem still persists. Restoring\n> an upgraded DB which contains existing data into exactly the same\n> instance will correct the behavior.\n\nI do not understand what you are saying above.\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Sep 2013 02:06:08 +0000", "msg_from": "Mel Llaguno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance bug in prepared statement binding in 9.2?" }, { "msg_contents": "On Tue, Sep 10, 2013 at 02:06:08AM +0000, Mel Llaguno wrote:\n> We're currently using an embedded PG 8.4.17 for our application. Our\n> PG 9.x tests consists of one of the following :\n>\n> - Take a 8.4.17 DB which contains only our application schema and\n> required seed data and use pg_upgrade to create a 9.x database\n> directory, run the analyze_new_cluster.sh script and fire up the\n> application. Under this set of conditions, the bind connection issue\n> is present during our test.\n>\n> - Start with a fresh PG 9.x DB (using use create_db) and use psql\n> to recreate our application schema and required seed data. When the\n> application is started and our test executed, the bind connection\n> issue is still present.\n>\n> In both of the above cases, a base application schema is used.\n>\n> If I upgrade an 8.4.17 DB that contains additional application data\n> (generated by interaction with our application) to 9.x, the bind\n> connection issue is no longer present. 
Restoring this upgraded 9.x DB\n> into any PG instance in the previously described scenarios also seems\n> to fix the bind connection issue.\n>\n> Please let me know if this clarifies my earlier post.\n\nYes, that is clear. So it is the seed data that is causing the problem?\nThat is the only difference I see between #2 and #3.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 9 Sep 2013 22:16:44 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance bug in prepared statement binding in 9.2?" } ]
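Separate from the seed-data question, the earlier note that long-running transactions run alongside the test, together with heap_hot_search_buffer dominating the bad-case profile, suggests one more cheap check while reproducing: whether an old transaction is holding back the xmin horizon and preventing HOT-chain pruning. A sketch using 9.2 column names, with an arbitrary limit:

```sql
-- Oldest open transactions first; a long-lived idle-in-transaction session
-- during the test window would keep dead row versions from being cleaned up.
SELECT pid, state, xact_start, now() - xact_start AS xact_age,
       left(query, 60) AS query_head
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;
```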
[ { "msg_contents": "Bruce,\n\nFirst of all, I'd like to thank you for taking some interest in this issue. We'd love to migrate to the latest PG version, but this issue is currently preventing us from doing so.\n\nRegardless of the DB used (base application schema _or_ restored DB with additional app data + base application schema), seed information is present in all tests. I guess my question is this - why would having existing data change the bind behavior at all? Is it possible that the way indexes are created has changed between 8.4 -> 9.x? \n\nThanks, M.\n\nMel Llaguno | Principal Engineer (Performance/Deployment)\nCoverity | 800 6th Avenue S.W. | Suite 410 | Calgary, AB | Canada | T2P 3G3\[email protected]\n________________________________________\nFrom: Bruce Momjian\n[[email protected]]\nSent: Monday, September 09, 2013 8:16 PM\nTo: Mel\nLlaguno\nCc: [email protected]\nSubject: Re: [PERFORM]\nPerformance bug in prepared statement binding in 9.2?\n\nOn Tue, Sep 10, 2013\nat 02:06:08AM +0000, Mel Llaguno wrote:\n> We're currently using an embedded\nPG 8.4.17 for our application. Our\n> PG 9.x tests consists of one of the\nfollowing :\n>\n> - Take a 8.4.17 DB which contains only our application\nschema and\n> required seed data and use pg_upgrade to create a 9.x\ndatabase\n> directory, run the analyze_new_cluster.sh script and fire up\nthe\n> application. Under this set of conditions, the bind connection issue\n>\nis present during our test.\n>\n> - Start with a fresh PG 9.x DB (using use\ncreate_db) and use psql\n> to recreate our application schema and required\nseed data. When the\n> application is started and our test executed, the bind\nconnection\n> issue is still present.\n>\n> In both of the above cases, a base\napplication schema is used.\n>\n> If I upgrade an 8.4.17 DB that contains\nadditional application data\n> (generated by interaction with our\napplication) to 9.x, the bind\n> connection issue is no longer present.\nRestoring this upgraded 9.x DB\n> into any PG instance in the previously\ndescribed scenarios also seems\n> to fix the bind connection issue.\n>\n>\nPlease let me know if this clarifies my earlier post.\n\nYes, that is clear.\nSo it is the seed data that is causing the problem?\nThat is the only\ndifferent I see from #2 and #3.\n\n--\n Bruce Momjian <[email protected]>\nhttp://momjian.us\n EnterpriseDB\nhttp://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Sep 2013 05:43:16 +0000", "msg_from": "Mel Llaguno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance bug in prepared statement binding in 9.2?" }, { "msg_contents": "On Tue, Sep 10, 2013 at 05:43:16AM +0000, Mel Llaguno wrote:\n> Bruce,\n>\n> First of all, I'd like to thank you for taking some interest in this\n> issue. We'd love to migrate to the latest PG version, but this issue\n> is currently preventing us from doing so.\n>\n> Regardless of the DB used (base application schema _or_ restored DB\n> with additional app data + base application schema), seed information\n> is present in all tests. I guess my question is this - why would\n> having existing data change the bind behavior at all? 
Is it possible\n> that the way indexes are created has changed between 8.4 -> 9.x?\n\nI don't know as you have not shown us exactly what is slower between\nversions --- you only said the bug appears or not in certain\ncircumstances.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Sep 2013 07:50:18 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance bug in prepared statement binding in 9.2?" } ]
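One way to show exactly what is slower, outside the application, is to drive a comparable prepared statement by hand and watch how plan time changes across executions; with the extended protocol the plan is built at bind time, and from 9.2 the plan cache may switch to a generic plan after roughly five executions, so the later EXECUTEs are the interesting ones. The statement and parameter below are placeholders, not the application's actual query:

```sql
\timing on
-- Placeholder statement standing in for whatever the application binds.
PREPARE probe (int) AS
    SELECT * FROM some_app_table WHERE some_indexed_col = $1;

-- Repeat a handful of times and compare timings; around the sixth execution
-- the plan shown may flip from a custom to a generic plan.
EXPLAIN (ANALYZE, BUFFERS) EXECUTE probe (42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE probe (42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE probe (42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE probe (42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE probe (42);
EXPLAIN (ANALYZE, BUFFERS) EXECUTE probe (42);
```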
[ { "msg_contents": "Hi All,\n\nI've been seeing a strange issue with our Postgres install for about a year\nnow, and I was hoping someone might be able to help point me at the cause.\nAt what seem like fairly random intervals Postgres will become unresponsive\nto the 3 application nodes it services. These periods tend to last for 10 -\n15 minutes before everything rights itself and the system goes back to\nnormal.\n\nDuring these periods the server will report a spike in the outbound\nbandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\ncontext switches / interrupts (normal peaks are around 2k/8k respectively,\nand during these periods they‘ve gone to 15k/22k), and a load average of\n100+. CPU usage stays relatively low, but it’s all system time reported,\nuser time goes to zero. It doesn‘t seem to be disk related since we’re\nrunning with a shared_buffers setting of 24G, which will fit just about our\nentire database into memory, and the IO transactions reported by the\nserver, as well as the disk reads reported by Postgres stay consistently\nlow.\n\nWe‘ve recently started tracking how long statements take to execute, and\nwe’re seeing some really odd numbers. A simple delete by primary key, for\nexample, from a table that contains about 280,000 rows, reportedly took\n18h59m46.900s. An update by primary key in that same table was reported as\n7d 17h 58m 30.415s. That table is frequently accessed, but obviously those\nnumbers don't seem reasonable at all.\n\nSome other changes we've made to postgresql.conf:\n\nsynchronous_commit = off\n\nmaintenance_work_mem = 1GB\nwal_level = hot_standby\nwal_buffers = 16MB\n\nmax_wal_senders = 10\n\nwal_keep_segments = 5000\n\ncheckpoint_segments = 128\n\ncheckpoint_timeout = 30min\n\ncheckpoint_completion_target = 0.9\n\nmax_connections = 500\n\nThe server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of\nRAM, running Cent OS 6.3.\n\nSo far we‘ve tried disabling Transparent Huge Pages after I found a number\nof resources online that indicated similar interrupt/context switch issues,\nbut it hasn’t resolve the problem. I managed to catch it happening once and\nrun a perf which showed:\n\n\n+ 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n+ 9.55% 10956 postmaster 0x2dc820 f\nset_config_option\n+ 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n+ 5.75% 6609 postmaster 0x5a2b0 f\nginHeapTupleFastCollect\n+ 2.68% 3084 postmaster 0x192483 f\nbuild_implied_join_equality\n+ 2.61% 2990 postmaster 0x187a55 f\nbuild_paths_for_OR\n+ 1.86% 2131 postmaster 0x794aa f\nget_collation_oid\n+ 1.56% 1822 postmaster 0x5a67e f\nginHeapTupleFastInsert\n+ 1.53% 1766 postmaster 0x1929bc f\ndistribute_qual_to_rels\n+ 1.33% 1558 postmaster 0x249671 f cmp_numerics\n\nI‘m not sure what 0x347ba9 represents, or why it’s an address rather than a\nmethod name.\n\nThat's about the sum of it. Any help would be greatly appreciated and if\nyou want any more information about our setup, please feel free to ask.\n\nThanks,\nDave\n\nHi All,\nI've been seeing a strange issue with our Postgres install for about a year now, and I was hoping someone might be able to help point me at the cause. At what seem like fairly random intervals Postgres will become unresponsive to the 3 application nodes it services. These periods tend to last for 10 - 15 minutes before everything rights itself and the system goes back to normal. 
\nDuring these periods the server will report a spike in the outbound bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in context switches / interrupts (normal peaks are around 2k/8k respectively, and during these periods they‘ve gone to 15k/22k), and a load average of 100+. CPU usage stays relatively low, but it’s all system time reported, user time goes to zero. It doesn‘t seem to be disk related since we’re running with a shared_buffers setting of 24G, which will fit just about our entire database into memory, and the IO transactions reported by the server, as well as the disk reads reported by Postgres stay consistently low.\nWe‘ve recently started tracking how long statements take to execute, and we’re seeing some really odd numbers. A simple delete by primary key, for example, from a table that contains about 280,000 rows, reportedly took 18h59m46.900s. An update by primary key in that same table was reported as 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those numbers don't seem reasonable at all.\nSome other changes we've made to postgresql.conf:\nsynchronous_commit = off\nmaintenance_work_mem = 1GBwal_level = hot_standbywal_buffers = 16MB\nmax_wal_senders = 10\nwal_keep_segments = 5000\ncheckpoint_segments = 128\ncheckpoint_timeout = 30min\ncheckpoint_completion_target = 0.9\nmax_connections = 500\nThe server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of RAM, running Cent OS 6.3. \nSo far we‘ve tried disabling Transparent Huge Pages after I found a number of resources online that indicated similar interrupt/context switch issues, but it hasn’t resolve the problem. I managed to catch it happening once and run a perf which showed:\n\n+ 41.40% 48154 postmaster 0x347ba9 f 0x347ba9 \n+ 9.55% 10956 postmaster 0x2dc820 f set_config_option \n+ 8.64% 9946 postmaster 0x5a3d4 f writeListPage \n+ 5.75% 6609 postmaster 0x5a2b0 f ginHeapTupleFastCollect \n+ 2.68% 3084 postmaster 0x192483 f build_implied_join_equality \n+ 2.61% 2990 postmaster 0x187a55 f build_paths_for_OR \n+ 1.86% 2131 postmaster 0x794aa f get_collation_oid \n+ 1.56% 1822 postmaster 0x5a67e f ginHeapTupleFastInsert \n+ 1.53% 1766 postmaster 0x1929bc f distribute_qual_to_rels \n+ 1.33% 1558 postmaster 0x249671 f cmp_numerics\nI‘m not sure what 0x347ba9 represents, or why it’s an address rather than a method name.\nThat's about the sum of it. Any help would be greatly appreciated and if you want any more information about our setup, please feel free to ask.\nThanks,Dave", "msg_date": "Tue, 10 Sep 2013 11:04:21 -0400", "msg_from": "David Whittaker <[email protected]>", "msg_from_op": true, "msg_subject": "Intermittent hangs with 9.2" }, { "msg_contents": "\nOn 09/10/2013 11:04 AM, David Whittaker wrote:\n>\n> Hi All,\n>\n> I've been seeing a strange issue with our Postgres install for about a \n> year now, and I was hoping someone might be able to help point me at \n> the cause. At what seem like fairly random intervals Postgres will \n> become unresponsive to the 3 application nodes it services. These \n> periods tend to last for 10 - 15 minutes before everything rights \n> itself and the system goes back to normal.\n>\n> During these periods the server will report a spike in the outbound \n> bandwidth (from about 1mbs to about 5mbs most recently), a huge spike \n> in context switches / interrupts (normal peaks are around 2k/8k \n> respectively, and during these periods they‘ve gone to 15k/22k), and a \n> load average of 100+. 
CPU usage stays relatively low, but it’s all \n> system time reported, user time goes to zero. It doesn‘t seem to be \n> disk related since we’re running with a shared_buffers setting of 24G, \n> which will fit just about our entire database into memory, and the IO \n> transactions reported by the server, as well as the disk reads \n> reported by Postgres stay consistently low.\n>\n> We‘ve recently started tracking how long statements take to execute, \n> and we’re seeing some really odd numbers. A simple delete by primary \n> key, for example, from a table that contains about 280,000 rows, \n> reportedly took 18h59m46.900s. An update by primary key in that same \n> table was reported as 7d 17h 58m 30.415s. That table is frequently \n> accessed, but obviously those numbers don't seem reasonable at all.\n>\n> Some other changes we've made to postgresql.conf:\n>\n> synchronous_commit = off\n>\n> maintenance_work_mem = 1GB\n> wal_level = hot_standby\n> wal_buffers = 16MB\n>\n> max_wal_senders = 10\n>\n> wal_keep_segments = 5000\n>\n> checkpoint_segments = 128\n>\n> checkpoint_timeout = 30min\n>\n> checkpoint_completion_target = 0.9\n>\n> max_connections = 500\n>\n> The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB \n> of RAM, running Cent OS 6.3.\n>\n> So far we‘ve tried disabling Transparent Huge Pages after I found a \n> number of resources online that indicated similar interrupt/context \n> switch issues, but it hasn’t resolve the problem. I managed to catch \n> it happening once and run a perf which showed:\n>\n> |\n> + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n> + 9.55% 10956 postmaster 0x2dc820 f set_config_option\n> + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n> + 5.75% 6609 postmaster 0x5a2b0 f ginHeapTupleFastCollect\n> + 2.68% 3084 postmaster 0x192483 f build_implied_join_equality\n> + 2.61% 2990 postmaster 0x187a55 f build_paths_for_OR\n> + 1.86% 2131 postmaster 0x794aa f get_collation_oid\n> + 1.56% 1822 postmaster 0x5a67e f ginHeapTupleFastInsert\n> + 1.53% 1766 postmaster 0x1929bc f distribute_qual_to_rels\n> + 1.33% 1558 postmaster 0x249671 f cmp_numerics|\n>\n> I‘m not sure what 0x347ba9 represents, or why it’s an address rather \n> than a method name.\n>\n> That's about the sum of it. Any help would be greatly appreciated and \n> if you want any more information about our setup, please feel free to ask.\n>\n>\n\nI have seen cases like this with very high shared_buffers settings.\n\n24Gb for shared_buffers is quite high, especially on a 48Gb box. What \nhappens if you dial that back to, say, 12Gb?\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Sep 2013 11:26:30 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent hangs with 9.2" }, { "msg_contents": "On Tue, Sep 10, 2013 at 11:04:21AM -0400, David Whittaker wrote:\n> Hi All,\n> \n> I've been seeing a strange issue with our Postgres install for about a year\n> now, and I was hoping someone might be able to help point me at the cause.\n> At what seem like fairly random intervals Postgres will become unresponsive\n> to the 3 application nodes it services. 
These periods tend to last for 10 -\n> 15 minutes before everything rights itself and the system goes back to\n> normal.\n> \n> During these periods the server will report a spike in the outbound\n> bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\n> context switches / interrupts (normal peaks are around 2k/8k respectively,\n> and during these periods they‘ve gone to 15k/22k), and a load average of\n> 100+. CPU usage stays relatively low, but it’s all system time reported,\n> user time goes to zero. It doesn‘t seem to be disk related since we’re\n> running with a shared_buffers setting of 24G, which will fit just about our\n> entire database into memory, and the IO transactions reported by the\n> server, as well as the disk reads reported by Postgres stay consistently\n> low.\n> \n> We‘ve recently started tracking how long statements take to execute, and\n> we’re seeing some really odd numbers. A simple delete by primary key, for\n> example, from a table that contains about 280,000 rows, reportedly took\n> 18h59m46.900s. An update by primary key in that same table was reported as\n> 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those\n> numbers don't seem reasonable at all.\n> \n> Some other changes we've made to postgresql.conf:\n> \n> synchronous_commit = off\n> \n> maintenance_work_mem = 1GB\n> wal_level = hot_standby\n> wal_buffers = 16MB\n> \n> max_wal_senders = 10\n> \n> wal_keep_segments = 5000\n> \n> checkpoint_segments = 128\n> \n> checkpoint_timeout = 30min\n> \n> checkpoint_completion_target = 0.9\n> \n> max_connections = 500\n> \n> The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of\n> RAM, running Cent OS 6.3.\n> \n> So far we‘ve tried disabling Transparent Huge Pages after I found a number\n> of resources online that indicated similar interrupt/context switch issues,\n> but it hasn’t resolve the problem. I managed to catch it happening once and\n> run a perf which showed:\n> \n> \n> + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n> + 9.55% 10956 postmaster 0x2dc820 f\n> set_config_option\n> + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n> + 5.75% 6609 postmaster 0x5a2b0 f\n> ginHeapTupleFastCollect\n> + 2.68% 3084 postmaster 0x192483 f\n> build_implied_join_equality\n> + 2.61% 2990 postmaster 0x187a55 f\n> build_paths_for_OR\n> + 1.86% 2131 postmaster 0x794aa f\n> get_collation_oid\n> + 1.56% 1822 postmaster 0x5a67e f\n> ginHeapTupleFastInsert\n> + 1.53% 1766 postmaster 0x1929bc f\n> distribute_qual_to_rels\n> + 1.33% 1558 postmaster 0x249671 f cmp_numerics\n> \n> I‘m not sure what 0x347ba9 represents, or why it’s an address rather than a\n> method name.\n> \n> That's about the sum of it. Any help would be greatly appreciated and if\n> you want any more information about our setup, please feel free to ask.\n> \n> Thanks,\n> Dave\n\nHi Dave,\n\nA load average of 100+ means that you have that many processes waiting to\nrun yet you only have 16 cpus. 
You really need to consider using a connection\npooler like pgbouncer to keep your connection count in the 16-32 range.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Sep 2013 10:33:48 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent hangs with 9.2" }, { "msg_contents": "On Tue, Sep 10, 2013 at 8:04 AM, David Whittaker <[email protected]> wrote:\n\n> Hi All,\n>\n> I've been seeing a strange issue with our Postgres install for about a\n> year now, and I was hoping someone might be able to help point me at the\n> cause. At what seem like fairly random intervals Postgres will become\n> unresponsive to the 3 application nodes it services. These periods tend to\n> last for 10 - 15 minutes before everything rights itself and the system\n> goes back to normal.\n>\n> During these periods the server will report a spike in the outbound\n> bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\n> context switches / interrupts (normal peaks are around 2k/8k respectively,\n> and during these periods they‘ve gone to 15k/22k), and a load average of\n> 100+.\n>\n\nI'm curious about the spike it outbound network usage. If the database is\nhung and no longer responding to queries, what is it getting sent over the\nnetwork? Can you snoop on that traffic?\n\n\n> CPU usage stays relatively low, but it’s all system time reported, user\n> time goes to zero. It doesn‘t seem to be disk related since we’re running\n> with a shared_buffers setting of 24G, which will fit just about our entire\n> database into memory, and the IO transactions reported by the server, as\n> well as the disk reads reported by Postgres stay consistently low.\n>\nThere have been reports that using very large shared_buffers can cause a\nlot of contention issues in the kernel, for some kernels. The usual advice\nis not to set shared_buffers above 8GB. The operating system can use the\nrest of the memory to cache for you.\n\nAlso, using a connection pooler and lowering the number of connections to\nthe real database has solved problems like this before.\n\n\n> We‘ve recently started tracking how long statements take to execute, and\n> we’re seeing some really odd numbers. A simple delete by primary key, for\n> example, from a table that contains about 280,000 rows, reportedly took\n> 18h59m46.900s. An update by primary key in that same table was reported as\n> 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those\n> numbers don't seem reasonable at all.\n>\nHow are your tracking those? Is it log_min_duration_statement or something\nelse?\n\nCheers,\n\nJeff\n\nOn Tue, Sep 10, 2013 at 8:04 AM, David Whittaker <[email protected]> wrote:\n\nHi All,\nI've been seeing a strange issue with our Postgres install for about a year now, and I was hoping someone might be able to help point me at the cause. At what seem like fairly random intervals Postgres will become unresponsive to the 3 application nodes it services. These periods tend to last for 10 - 15 minutes before everything rights itself and the system goes back to normal. 
\nDuring these periods the server will report a spike in the outbound bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in context switches / interrupts (normal peaks are around 2k/8k respectively, and during these periods they‘ve gone to 15k/22k), and a load average of 100+.\nI'm curious about the spike it outbound network usage.  If the database is hung and no longer responding to queries, what is it getting sent over the network?  Can you snoop on that traffic?\n \n CPU usage stays relatively low, but it’s all system time reported, user time goes to zero. It doesn‘t seem to be disk related since we’re running with a shared_buffers setting of 24G, which will fit just about our entire database into memory, and the IO transactions reported by the server, as well as the disk reads reported by Postgres stay consistently low.\nThere have been reports that using very large shared_buffers can cause a lot of contention issues in the kernel, for some kernels. The usual advice is not to set shared_buffers above 8GB.  The operating system can use the rest of the memory to cache for you.\nAlso, using a connection pooler and lowering the number of connections to the real database has solved problems like this before. \n\nWe‘ve recently started tracking how long statements take to execute, and we’re seeing some really odd numbers. A simple delete by primary key, for example, from a table that contains about 280,000 rows, reportedly took 18h59m46.900s. An update by primary key in that same table was reported as 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those numbers don't seem reasonable at all.\nHow are your tracking those?  Is it log_min_duration_statement or something else?Cheers,Jeff", "msg_date": "Tue, 10 Sep 2013 10:44:37 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent hangs with 9.2" }, { "msg_contents": "On Tue, Sep 10, 2013 at 10:04 AM, David Whittaker <[email protected]> wrote:\n> Hi All,\n>\n> I've been seeing a strange issue with our Postgres install for about a year\n> now, and I was hoping someone might be able to help point me at the cause.\n> At what seem like fairly random intervals Postgres will become unresponsive\n> to the 3 application nodes it services. These periods tend to last for 10 -\n> 15 minutes before everything rights itself and the system goes back to\n> normal.\n>\n> During these periods the server will report a spike in the outbound\n> bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\n> context switches / interrupts (normal peaks are around 2k/8k respectively,\n> and during these periods they‘ve gone to 15k/22k), and a load average of\n> 100+. CPU usage stays relatively low, but it’s all system time reported,\n> user time goes to zero. It doesn‘t seem to be disk related since we’re\n> running with a shared_buffers setting of 24G, which will fit just about our\n> entire database into memory, and the IO transactions reported by the server,\n> as well as the disk reads reported by Postgres stay consistently low.\n>\n> We‘ve recently started tracking how long statements take to execute, and\n> we’re seeing some really odd numbers. A simple delete by primary key, for\n> example, from a table that contains about 280,000 rows, reportedly took\n> 18h59m46.900s. An update by primary key in that same table was reported as\n> 7d 17h 58m 30.415s. 
That table is frequently accessed, but obviously those\n> numbers don't seem reasonable at all.\n>\n> Some other changes we've made to postgresql.conf:\n>\n> synchronous_commit = off\n>\n> maintenance_work_mem = 1GB\n> wal_level = hot_standby\n> wal_buffers = 16MB\n>\n> max_wal_senders = 10\n>\n> wal_keep_segments = 5000\n>\n> checkpoint_segments = 128\n>\n> checkpoint_timeout = 30min\n>\n> checkpoint_completion_target = 0.9\n>\n> max_connections = 500\n>\n> The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of\n> RAM, running Cent OS 6.3.\n>\n> So far we‘ve tried disabling Transparent Huge Pages after I found a number\n> of resources online that indicated similar interrupt/context switch issues,\n> but it hasn’t resolve the problem. I managed to catch it happening once and\n> run a perf which showed:\n>\n> + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n> + 9.55% 10956 postmaster 0x2dc820 f set_config_option\n> + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n> + 5.75% 6609 postmaster 0x5a2b0 f\n> ginHeapTupleFastCollect\n> + 2.68% 3084 postmaster 0x192483 f\n> build_implied_join_equality\n> + 2.61% 2990 postmaster 0x187a55 f build_paths_for_OR\n> + 1.86% 2131 postmaster 0x794aa f get_collation_oid\n> + 1.56% 1822 postmaster 0x5a67e f ginHeapTupleFastInsert\n> + 1.53% 1766 postmaster 0x1929bc f\n> distribute_qual_to_rels\n> + 1.33% 1558 postmaster 0x249671 f cmp_numerics\n>\n> I‘m not sure what 0x347ba9 represents, or why it’s an address rather than a\n> method name.\n>\n> That's about the sum of it. Any help would be greatly appreciated and if you\n> want any more information about our setup, please feel free to ask.\n\n\nReducing shared buffers to around 2gb will probably make the problem go away\n\n*) What's your ratio reads to writes (approximately)?\n\n*) How many connections when it happens. Do connections pile on after that?\n\n*) Are you willing to run custom patched postmaster to help\ntroubleshoot the problem?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 07:43:35 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent hangs with 9.2" }, { "msg_contents": "On 2013-09-11 07:43:35 -0500, Merlin Moncure wrote:\n> > I've been seeing a strange issue with our Postgres install for about a year\n> > now, and I was hoping someone might be able to help point me at the cause.\n> > At what seem like fairly random intervals Postgres will become unresponsive\n> > to the 3 application nodes it services. These periods tend to last for 10 -\n> > 15 minutes before everything rights itself and the system goes back to\n> > normal.\n> >\n> > During these periods the server will report a spike in the outbound\n> > bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\n> > context switches / interrupts (normal peaks are around 2k/8k respectively,\n> > and during these periods they‘ve gone to 15k/22k), and a load average of\n> > 100+. CPU usage stays relatively low, but it’s all system time reported,\n> > user time goes to zero. 
It doesn‘t seem to be disk related since we’re\n> > running with a shared_buffers setting of 24G, which will fit just about our\n> > entire database into memory, and the IO transactions reported by the server,\n> > as well as the disk reads reported by Postgres stay consistently low.\n> >\n> > We‘ve recently started tracking how long statements take to execute, and\n> > we’re seeing some really odd numbers. A simple delete by primary key, for\n> > example, from a table that contains about 280,000 rows, reportedly took\n> > 18h59m46.900s. An update by primary key in that same table was reported as\n> > 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those\n> > numbers don't seem reasonable at all.\n> >\n> > Some other changes we've made to postgresql.conf:\n> >\n> > synchronous_commit = off\n> >\n> > maintenance_work_mem = 1GB\n> > wal_level = hot_standby\n> > wal_buffers = 16MB\n> >\n> > max_wal_senders = 10\n> >\n> > wal_keep_segments = 5000\n> >\n> > checkpoint_segments = 128\n> >\n> > checkpoint_timeout = 30min\n> >\n> > checkpoint_completion_target = 0.9\n> >\n> > max_connections = 500\n> >\n> > The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of\n> > RAM, running Cent OS 6.3.\n> >\n> > So far we‘ve tried disabling Transparent Huge Pages after I found a number\n> > of resources online that indicated similar interrupt/context switch issues,\n> > but it hasn’t resolve the problem. I managed to catch it happening once and\n> > run a perf which showed:\n> >\n> > + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n> > + 9.55% 10956 postmaster 0x2dc820 f set_config_option\n> > + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n> > + 5.75% 6609 postmaster 0x5a2b0 f\n> > ginHeapTupleFastCollect\n> > + 2.68% 3084 postmaster 0x192483 f\n> > build_implied_join_equality\n> > + 2.61% 2990 postmaster 0x187a55 f build_paths_for_OR\n> > + 1.86% 2131 postmaster 0x794aa f get_collation_oid\n> > + 1.56% 1822 postmaster 0x5a67e f ginHeapTupleFastInsert\n> > + 1.53% 1766 postmaster 0x1929bc f\n> > distribute_qual_to_rels\n> > + 1.33% 1558 postmaster 0x249671 f cmp_numerics\n> >\n> > I‘m not sure what 0x347ba9 represents, or why it’s an address rather than a\n> > method name.\n\nTry converting it to something more meaningful with \"addr2line\", that\noften has more sucess.\n\n> > That's about the sum of it. Any help would be greatly appreciated and if you\n> > want any more information about our setup, please feel free to ask.\n\n> Reducing shared buffers to around 2gb will probably make the problem go away\n\nThat profile doesn't really look like one of the problem you are\nreferring to would look like.\n\nBased on the profile I'd guess it's possible that you're seing problems\nwith GIN's \"fastupdate\" mechanism.\nTry ALTER INDEX whatever SET (FASTUPDATE = OFF); VACUUM\nwhatever's_table for all gin indexes.\n\nIt's curious that set_config_option is so high in the profile... Any\nchance you could recompile postgres with -fno-omit-frame-pointers in\nCFLAGS? That would allow you to use perf -g. 
The performance price of\nthat usually is below 1% for postgres.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 19:17:36 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent hangs with 9.2" }, { "msg_contents": "On Wed, Sep 11, 2013 at 12:17 PM, Andres Freund <[email protected]> wrote:\n> On 2013-09-11 07:43:35 -0500, Merlin Moncure wrote:\n>> > I've been seeing a strange issue with our Postgres install for about a year\n>> > now, and I was hoping someone might be able to help point me at the cause.\n>> > At what seem like fairly random intervals Postgres will become unresponsive\n>> > to the 3 application nodes it services. These periods tend to last for 10 -\n>> > 15 minutes before everything rights itself and the system goes back to\n>> > normal.\n>> >\n>> > During these periods the server will report a spike in the outbound\n>> > bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\n>> > context switches / interrupts (normal peaks are around 2k/8k respectively,\n>> > and during these periods they‘ve gone to 15k/22k), and a load average of\n>> > 100+. CPU usage stays relatively low, but it’s all system time reported,\n>> > user time goes to zero. It doesn‘t seem to be disk related since we’re\n>> > running with a shared_buffers setting of 24G, which will fit just about our\n>> > entire database into memory, and the IO transactions reported by the server,\n>> > as well as the disk reads reported by Postgres stay consistently low.\n>> >\n>> > We‘ve recently started tracking how long statements take to execute, and\n>> > we’re seeing some really odd numbers. A simple delete by primary key, for\n>> > example, from a table that contains about 280,000 rows, reportedly took\n>> > 18h59m46.900s. An update by primary key in that same table was reported as\n>> > 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those\n>> > numbers don't seem reasonable at all.\n>> >\n>> > Some other changes we've made to postgresql.conf:\n>> >\n>> > synchronous_commit = off\n>> >\n>> > maintenance_work_mem = 1GB\n>> > wal_level = hot_standby\n>> > wal_buffers = 16MB\n>> >\n>> > max_wal_senders = 10\n>> >\n>> > wal_keep_segments = 5000\n>> >\n>> > checkpoint_segments = 128\n>> >\n>> > checkpoint_timeout = 30min\n>> >\n>> > checkpoint_completion_target = 0.9\n>> >\n>> > max_connections = 500\n>> >\n>> > The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of\n>> > RAM, running Cent OS 6.3.\n>> >\n>> > So far we‘ve tried disabling Transparent Huge Pages after I found a number\n>> > of resources online that indicated similar interrupt/context switch issues,\n>> > but it hasn’t resolve the problem. 
I managed to catch it happening once and\n>> > run a perf which showed:\n>> >\n>> > + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n>> > + 9.55% 10956 postmaster 0x2dc820 f set_config_option\n>> > + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n>> > + 5.75% 6609 postmaster 0x5a2b0 f\n>> > ginHeapTupleFastCollect\n>> > + 2.68% 3084 postmaster 0x192483 f\n>> > build_implied_join_equality\n>> > + 2.61% 2990 postmaster 0x187a55 f build_paths_for_OR\n>> > + 1.86% 2131 postmaster 0x794aa f get_collation_oid\n>> > + 1.56% 1822 postmaster 0x5a67e f ginHeapTupleFastInsert\n>> > + 1.53% 1766 postmaster 0x1929bc f\n>> > distribute_qual_to_rels\n>> > + 1.33% 1558 postmaster 0x249671 f cmp_numerics\n>> >\n>> > I‘m not sure what 0x347ba9 represents, or why it’s an address rather than a\n>> > method name.\n>\n> Try converting it to something more meaningful with \"addr2line\", that\n> often has more sucess.\n>\n>> > That's about the sum of it. Any help would be greatly appreciated and if you\n>> > want any more information about our setup, please feel free to ask.\n>\n>> Reducing shared buffers to around 2gb will probably make the problem go away\n>\n> That profile doesn't really look like one of the problem you are\n> referring to would look like.\n\nyup -- I think you're right.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 13:26:04 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent hangs with 9.2" }, { "msg_contents": "Hi All,\n\nWe lowered shared_buffers to 8G and increased effective_cache_size\naccordingly. So far, we haven't seen any issues since the adjustment. The\nissues have come and gone in the past, so I'm not convinced it won't crop\nup again, but I think the best course is to wait a week or so and see how\nthings work out before we make any other changes.\n\nThank you all for your help, and if the problem does reoccur, we'll look\ninto the other options suggested, like using a patched postmaster and\ncompiling for perf -g.\n\nThanks again, I really appreciate the feedback from everyone.\n\n-Dave\n\n\nOn Wed, Sep 11, 2013 at 1:17 PM, Andres Freund <[email protected]>wrote:\n\n> On 2013-09-11 07:43:35 -0500, Merlin Moncure wrote:\n> > > I've been seeing a strange issue with our Postgres install for about a\n> year\n> > > now, and I was hoping someone might be able to help point me at the\n> cause.\n> > > At what seem like fairly random intervals Postgres will become\n> unresponsive\n> > > to the 3 application nodes it services. These periods tend to last for\n> 10 -\n> > > 15 minutes before everything rights itself and the system goes back to\n> > > normal.\n> > >\n> > > During these periods the server will report a spike in the outbound\n> > > bandwidth (from about 1mbs to about 5mbs most recently), a huge spike\n> in\n> > > context switches / interrupts (normal peaks are around 2k/8k\n> respectively,\n> > > and during these periods they‘ve gone to 15k/22k), and a load average\n> of\n> > > 100+. CPU usage stays relatively low, but it’s all system time\n> reported,\n> > > user time goes to zero. 
It doesn‘t seem to be disk related since we’re\n> > > running with a shared_buffers setting of 24G, which will fit just\n> about our\n> > > entire database into memory, and the IO transactions reported by the\n> server,\n> > > as well as the disk reads reported by Postgres stay consistently low.\n> > >\n> > > We‘ve recently started tracking how long statements take to execute,\n> and\n> > > we’re seeing some really odd numbers. A simple delete by primary key,\n> for\n> > > example, from a table that contains about 280,000 rows, reportedly took\n> > > 18h59m46.900s. An update by primary key in that same table was\n> reported as\n> > > 7d 17h 58m 30.415s. That table is frequently accessed, but obviously\n> those\n> > > numbers don't seem reasonable at all.\n> > >\n> > > Some other changes we've made to postgresql.conf:\n> > >\n> > > synchronous_commit = off\n> > >\n> > > maintenance_work_mem = 1GB\n> > > wal_level = hot_standby\n> > > wal_buffers = 16MB\n> > >\n> > > max_wal_senders = 10\n> > >\n> > > wal_keep_segments = 5000\n> > >\n> > > checkpoint_segments = 128\n> > >\n> > > checkpoint_timeout = 30min\n> > >\n> > > checkpoint_completion_target = 0.9\n> > >\n> > > max_connections = 500\n> > >\n> > > The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB\n> of\n> > > RAM, running Cent OS 6.3.\n> > >\n> > > So far we‘ve tried disabling Transparent Huge Pages after I found a\n> number\n> > > of resources online that indicated similar interrupt/context switch\n> issues,\n> > > but it hasn’t resolve the problem. I managed to catch it happening\n> once and\n> > > run a perf which showed:\n> > >\n> > > + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n> > > + 9.55% 10956 postmaster 0x2dc820 f set_config_option\n> > > + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n> > > + 5.75% 6609 postmaster 0x5a2b0 f\n> > > ginHeapTupleFastCollect\n> > > + 2.68% 3084 postmaster 0x192483 f\n> > > build_implied_join_equality\n> > > + 2.61% 2990 postmaster 0x187a55 f\n> build_paths_for_OR\n> > > + 1.86% 2131 postmaster 0x794aa f get_collation_oid\n> > > + 1.56% 1822 postmaster 0x5a67e f\n> ginHeapTupleFastInsert\n> > > + 1.53% 1766 postmaster 0x1929bc f\n> > > distribute_qual_to_rels\n> > > + 1.33% 1558 postmaster 0x249671 f cmp_numerics\n> > >\n> > > I‘m not sure what 0x347ba9 represents, or why it’s an address rather\n> than a\n> > > method name.\n>\n> Try converting it to something more meaningful with \"addr2line\", that\n> often has more sucess.\n>\n> > > That's about the sum of it. Any help would be greatly appreciated and\n> if you\n> > > want any more information about our setup, please feel free to ask.\n>\n> > Reducing shared buffers to around 2gb will probably make the problem go\n> away\n>\n> That profile doesn't really look like one of the problem you are\n> referring to would look like.\n>\n> Based on the profile I'd guess it's possible that you're seing problems\n> with GIN's \"fastupdate\" mechanism.\n> Try ALTER INDEX whatever SET (FASTUPDATE = OFF); VACUUM\n> whatever's_table for all gin indexes.\n>\n> It's curious that set_config_option is so high in the profile... Any\n> chance you could recompile postgres with -fno-omit-frame-pointers in\n> CFLAGS? That would allow you to use perf -g. 
The performance price of\n> that usually is below 1% for postgres.\n>\n> Greetings,\n>\n> Andres Freund\n>\n> --\n> Andres Freund    http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n", "msg_date": "Thu, 12 Sep 2013 16:06:22 -0400", "msg_from": "David Whittaker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intermittent hangs with 9.2" },
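As a point of reference for the GIN fastupdate suggestion quoted above, a minimal, untested sketch for locating the GIN indexes and disabling fastupdate on them could look like this (the index and table names in the last two statements are placeholders):

    SELECT c.relname AS index_name, t.relname AS table_name
    FROM pg_class c
    JOIN pg_index i ON i.indexrelid = c.oid
    JOIN pg_class t ON t.oid = i.indrelid
    JOIN pg_am am ON am.oid = c.relam
    WHERE am.amname = 'gin';

    -- then, for each index/table pair reported above (placeholder names):
    ALTER INDEX some_gin_index SET (fastupdate = off);
    VACUUM some_table;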
  { "msg_contents": "On Thu, Sep 12, 2013 at 3:06 PM, David Whittaker <[email protected]> wrote:\n> Hi All,\n>\n> We lowered shared_buffers to 8G and increased effective_cache_size\n> accordingly.  So far, we haven't seen any issues since the adjustment.  The\n> issues have come and gone in the past, so I'm not convinced it won't crop up\n> again, but I think the best course is to wait a week or so and see how\n> things work out before we make any other changes.\n>\n> Thank you all for your help, and if the problem does reoccur, we'll look\n> into the other options suggested, like using a patched postmaster and\n> compiling for perf -g.\n>\n> Thanks again, I really appreciate the feedback from everyone.\n\nInteresting -- please respond with a follow up if/when you feel\nsatisfied the problem has gone away.  Andres was right; I initially\nmis-diagnosed the problem (there is another issue I'm chasing that has\na similar performance presentation but originates from a different\narea of the code).\n\nThat said, if reducing shared_buffers made *your* problem go away as\nwell, then this is more evidence that we have an underlying contention\nmechanic that is somehow influenced by the setting.  Speaking frankly,\nunder certain workloads we seem to have contention issues in the\ngeneral area of the buffer system.  I'm thinking (guessing) that the\nproblem is that usage_count is getting incremented faster than the buffers\nare getting cleared out, which is then causing the sweeper to spend\nmore and more time examining hotly contended buffers.  This may make\nno sense in the context of your issue; I haven't looked at the code\nyet.  Also, I've been unable to cause this to happen in simulated\ntesting.  But I'm suspicious (and dollars to doughnuts '0x347ba9' is\nspinlock related).\n\nAnyways, thanks for the report and (hopefully) the follow up.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 13 Sep 2013 09:52:16 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent hangs with 9.2" },
  { "msg_contents": "On Fri, Sep 13, 2013 at 10:52 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Sep 12, 2013 at 3:06 PM, David Whittaker <[email protected]> wrote:\n> > Hi All,\n> >\n> > We lowered shared_buffers to 8G and increased effective_cache_size\n> > accordingly.  So far, we haven't seen any issues since the adjustment.  The\n> > issues have come and gone in the past, so I'm not convinced it won't crop up\n> > again, but I think the best course is to wait a week or so and see how\n> > things work out before we make any other changes.\n> >\n> > Thank you all for your help, and if the problem does reoccur, we'll look\n> > into the other options suggested, like using a patched postmaster and\n> > compiling for perf -g.\n> >\n> > Thanks again, I really appreciate the feedback from everyone.\n>\n> Interesting -- please respond with a follow up if/when you feel\n> satisfied the problem has gone away.  Andres was right; I initially\n> mis-diagnosed the problem (there is another issue I'm chasing that has\n> a similar performance presentation but originates from a different\n> area of the code).\n>\n> That said, if reducing shared_buffers made *your* problem go away as\n> well, then this is more evidence that we have an underlying contention\n> mechanic that is somehow influenced by the setting.  Speaking frankly,\n> under certain workloads we seem to have contention issues in the\n> general area of the buffer system.  I'm thinking (guessing) that the\n> problem is that usage_count is getting incremented faster than the buffers\n> are getting cleared out, which is then causing the sweeper to spend\n> more and more time examining hotly contended buffers.  This may make\n> no sense in the context of your issue; I haven't looked at the code\n> yet.  Also, I've been unable to cause this to happen in simulated\n> testing.  But I'm suspicious (and dollars to doughnuts '0x347ba9' is\n> spinlock related).\n>\n> Anyways, thanks for the report and (hopefully) the follow up.\n>\n> merlin\n>\n\nYou guys have taken the time to help me through this, following up is the\nleast I can do.  So far we're still looking good.\n", "msg_date": "Fri, 13 Sep 2013 11:05:24 -0400", "msg_from": "David Whittaker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intermittent hangs with 9.2" },
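To put a number on the usage_count hypothesis discussed above, one possible (untested) way to look at the buffer cache directly is the pg_buffercache contrib module, assuming it is installed on the server:

    CREATE EXTENSION pg_buffercache;

    -- distribution of usage counts across shared_buffers
    -- (reading this view is not free on a busy server, so run it sparingly)
    SELECT usagecount, isdirty, count(*) AS buffers
    FROM pg_buffercache
    GROUP BY usagecount, isdirty
    ORDER BY usagecount, isdirty;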
  { "msg_contents": "We haven't seen any issues since we decreased shared_buffers.  We also\ntuned some of the longer running / more frequently executed queries, so\nthat may have had an effect as well, but my money would be on the\nshared_buffers change.  If the issue re-appears I'll try to get a perf\nagain and post back, but if you don't hear from me again you can assume the\nproblem is solved.\n\nThank you all again for the help.\n\n-Dave\n\nOn Fri, Sep 13, 2013 at 11:05 AM, David Whittaker <[email protected]> wrote:\n\n> On Fri, Sep 13, 2013 at 10:52 AM, Merlin Moncure <[email protected]> wrote:\n>\n>> On Thu, Sep 12, 2013 at 3:06 PM, David Whittaker <[email protected]> wrote:\n>> > Hi All,\n>> >\n>> > We lowered shared_buffers to 8G and increased effective_cache_size\n>> > accordingly.  So far, we haven't seen any issues since the adjustment.  The\n>> > issues have come and gone in the past, so I'm not convinced it won't crop up\n>> > again, but I think the best course is to wait a week or so and see how\n>> > things work out before we make any other changes.\n>> >\n>> > Thank you all for your help, and if the problem does reoccur, we'll look\n>> > into the other options suggested, like using a patched postmaster and\n>> > compiling for perf -g.\n>> >\n>> > Thanks again, I really appreciate the feedback from everyone.\n>>\n>> Interesting -- please respond with a follow up if/when you feel\n>> satisfied the problem has gone away.  Andres was right; I initially\n>> mis-diagnosed the problem (there is another issue I'm chasing that has\n>> a similar performance presentation but originates from a different\n>> area of the code).\n>>\n>> That said, if reducing shared_buffers made *your* problem go away as\n>> well, then this is more evidence that we have an underlying contention\n>> mechanic that is somehow influenced by the setting.  Speaking frankly,\n>> under certain workloads we seem to have contention issues in the\n>> general area of the buffer system.  I'm thinking (guessing) that the\n>> problem is that usage_count is getting incremented faster than the buffers\n>> are getting cleared out, which is then causing the sweeper to spend\n>> more and more time examining hotly contended buffers.  This may make\n>> no sense in the context of your issue; I haven't looked at the code\n>> yet.  Also, I've been unable to cause this to happen in simulated\n>> testing.  But I'm suspicious (and dollars to doughnuts '0x347ba9' is\n>> spinlock related).\n>>\n>> Anyways, thanks for the report and (hopefully) the follow up.\n>>\n>> merlin\n>>\n>\n> You guys have taken the time to help me through this, following up is the\n> least I can do.  So far we're still looking good.\n>\n", "msg_date": "Fri, 20 Sep 2013 14:11:12 -0400", "msg_from": "David Whittaker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intermittent hangs with 9.2" } ]
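For completeness, one untested way to confirm which values the running server actually picked up after the shared_buffers / effective_cache_size change discussed in the thread above:

    SELECT name, setting, unit, source
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'effective_cache_size', 'max_connections');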
[ { "msg_contents": "Hi Andrew,\n\n\nOn Tue, Sep 10, 2013 at 11:26 AM, Andrew Dunstan <[email protected]>wrote:\n\n>\n> On 09/10/2013 11:04 AM, David Whittaker wrote:\n>\n>>\n>> Hi All,\n>>\n>> I've been seeing a strange issue with our Postgres install for about a\n>> year now, and I was hoping someone might be able to help point me at the\n>> cause. At what seem like fairly random intervals Postgres will become\n>> unresponsive to the 3 application nodes it services. These periods tend to\n>> last for 10 - 15 minutes before everything rights itself and the system\n>> goes back to normal.\n>>\n>> During these periods the server will report a spike in the outbound\n>> bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\n>> context switches / interrupts (normal peaks are around 2k/8k respectively,\n>> and during these periods they‘ve gone to 15k/22k), and a load average of\n>> 100+. CPU usage stays relatively low, but it’s all system time reported,\n>> user time goes to zero. It doesn‘t seem to be disk related since we’re\n>> running with a shared_buffers setting of 24G, which will fit just about our\n>> entire database into memory, and the IO transactions reported by the\n>> server, as well as the disk reads reported by Postgres stay consistently\n>> low.\n>>\n>> We‘ve recently started tracking how long statements take to execute, and\n>> we’re seeing some really odd numbers. A simple delete by primary key, for\n>> example, from a table that contains about 280,000 rows, reportedly took\n>> 18h59m46.900s. An update by primary key in that same table was reported as\n>> 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those\n>> numbers don't seem reasonable at all.\n>>\n>> Some other changes we've made to postgresql.conf:\n>>\n>> synchronous_commit = off\n>>\n>> maintenance_work_mem = 1GB\n>> wal_level = hot_standby\n>> wal_buffers = 16MB\n>>\n>> max_wal_senders = 10\n>>\n>> wal_keep_segments = 5000\n>>\n>> checkpoint_segments = 128\n>>\n>> checkpoint_timeout = 30min\n>>\n>> checkpoint_completion_target = 0.9\n>>\n>> max_connections = 500\n>>\n>> The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of\n>> RAM, running Cent OS 6.3.\n>>\n>> So far we‘ve tried disabling Transparent Huge Pages after I found a\n>> number of resources online that indicated similar interrupt/context switch\n>> issues, but it hasn’t resolve the problem. I managed to catch it happening\n>> once and run a perf which showed:\n>>\n>> |\n>> + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n>> + 9.55% 10956 postmaster 0x2dc820 f set_config_option\n>> + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n>> + 5.75% 6609 postmaster 0x5a2b0 f\n>> ginHeapTupleFastCollect\n>> + 2.68% 3084 postmaster 0x192483 f\n>> build_implied_join_equality\n>> + 2.61% 2990 postmaster 0x187a55 f build_paths_for_OR\n>> + 1.86% 2131 postmaster 0x794aa f get_collation_oid\n>> + 1.56% 1822 postmaster 0x5a67e f\n>> ginHeapTupleFastInsert\n>> + 1.53% 1766 postmaster 0x1929bc f\n>> distribute_qual_to_rels\n>> + 1.33% 1558 postmaster 0x249671 f cmp_numerics|\n>>\n>> I‘m not sure what 0x347ba9 represents, or why it’s an address rather than\n>> a method name.\n>>\n>> That's about the sum of it. Any help would be greatly appreciated and if\n>> you want any more information about our setup, please feel free to ask.\n>>\n>>\n>>\n> I have seen cases like this with very high shared_buffers settings.\n>\n> 24Gb for shared_buffers is quite high, especially on a 48Gb box. 
What\n> happens if you dial that back to, say, 12Gb?\n>\n\nI'd be willing to give it a try.  I'd really like to understand what's\ngoing on here though.  Can you elaborate on that?  Why would 24G of shared\nbuffers be too high in this case?  The machine is devoted entirely to PG,\nso having PG use half of the available RAM to cache data doesn't feel\nunreasonable.\n\n\n>\n> cheers\n>\n> andrew\n>\n", "msg_date": "Tue, 10 Sep 2013 14:04:57 -0400", "msg_from": "David Whittaker <[email protected]>", "msg_from_op": true, "msg_subject": "Intermittent hangs with 9.2" },
  { "msg_contents": "On 10/09/13 20:04, David Whittaker wrote:\n> On Tue, Sep 10, 2013 at 11:26 AM, Andrew Dunstan <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> \n>     On 09/10/2013 11:04 AM, David Whittaker wrote:\n> \n> \n>         Hi All,\n> \n>         I've been seeing a strange issue with our Postgres install for\n>         about a year now, and I was hoping someone might be able to help\n>         point me at the cause. At what seem like fairly random intervals\n>         Postgres will become unresponsive to the 3 application nodes it\n>         services. These periods tend to last for 10 - 15 minutes before\n>         everything rights itself and the system goes back to normal.\n> \n>         During these periods the server will report a spike in the\n>         outbound bandwidth (from about 1mbs to about 5mbs most recently),\n>         a huge spike in context switches / interrupts (normal peaks are\n>         around 2k/8k respectively, and during these periods they've gone\n>         to 15k/22k), and a load average of 100+. CPU usage stays\n>         relatively low, but it's all system time reported, user time\n>         goes to zero. It doesn't seem to be disk related since we're\n>         running with a shared_buffers setting of 24G, which will fit\n>         just about our entire database into memory, and the IO\n>         transactions reported by the server, as well as the disk reads\n>         reported by Postgres, stay consistently low.\n> \n>         We've recently started tracking how long statements take to\n>         execute, and we're seeing some really odd numbers. A simple\n>         delete by primary key, for example, from a table that contains\n>         about 280,000 rows, reportedly took 18h59m46.900s. An update by\n>         primary key in that same table was reported as 7d 17h 58m\n>         30.415s. 
That table is frequently accessed, but obviously those\n> numbers don't seem reasonable at all.\n> \n> Some other changes we've made to postgresql.conf:\n> \n> synchronous_commit = off\n> \n> maintenance_work_mem = 1GB\n> wal_level = hot_standby\n> wal_buffers = 16MB\n> \n> max_wal_senders = 10\n> \n> wal_keep_segments = 5000\n> \n> checkpoint_segments = 128\n> \n> checkpoint_timeout = 30min\n> \n> checkpoint_completion_target = 0.9\n> \n> max_connections = 500\n> \n> The server is a Dell Poweredge R900 with 4 Xeon E7430\n> processors, 48GB of RAM, running Cent OS 6.3.\n> \n> So far we�ve tried disabling Transparent Huge Pages after I\n> found a number of resources online that indicated similar\n> interrupt/context switch issues, but it hasn�t resolve the\n> problem. I managed to catch it happening once and run a perf\n> which showed:\n> \n> |\n> + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n> + 9.55% 10956 postmaster 0x2dc820 f\n> set_config_option\n> + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n> + 5.75% 6609 postmaster 0x5a2b0 f\n> ginHeapTupleFastCollect\n> + 2.68% 3084 postmaster 0x192483 f\n> build_implied_join_equality\n> + 2.61% 2990 postmaster 0x187a55 f\n> build_paths_for_OR\n> + 1.86% 2131 postmaster 0x794aa f\n> get_collation_oid\n> + 1.56% 1822 postmaster 0x5a67e f\n> ginHeapTupleFastInsert\n> + 1.53% 1766 postmaster 0x1929bc f\n> distribute_qual_to_rels\n> + 1.33% 1558 postmaster 0x249671 f cmp_numerics|\n> \n> I�m not sure what 0x347ba9 represents, or why it�s an address\n> rather than a method name.\n> \n> That's about the sum of it. Any help would be greatly\n> appreciated and if you want any more information about our\n> setup, please feel free to ask.\n> \n> \n> \n> I have seen cases like this with very high shared_buffers settings.\n> \n> 24Gb for shared_buffers is quite high, especially on a 48Gb box.\n> What happens if you dial that back to, say, 12Gb?\n> \n> \n> I'd be willing to give it a try. I'd really like to understand what's\n> going on here though. Can you elaborate on that? Why would 24G of\n> shared buffers be too high in this case? The machine is devoted\n> entirely to PG, so having PG use half of the available RAM to cache data\n> doesn't feel unreasonable.\n\nHere is what I have recently learned.\n\nThe root cause is crash safety and checkpoints. This is certainly\nsomething you want. When you write to the database these operations\nfirst occur in the buffer cache and the particular buffer you write to\nis marked dirty. The cache is organized in chunks of 8kb. Additionally\nwrite operations are also committed to the WAL.\n\nA checkpoint iterates over all dirty buffers writing them to the\ndatabase. After that all buffers are clean again.\n\nNow, if you write to a clean buffer it gets entirely written to the WAL.\nThat means after a checkpoint since every buffer is clean every write\ntriggers an 8kb write to the WAL. (Already dirty buffers are written\nonly partially)\n\nAnd the more shared buffers you have the more can be dirtied immediately\nafter a checkpoint, hence the spike.\n\nTo mitigate that lower shared_buffers to the 12GB Andrew mentioned or\neven lower (8GB) but also adjust effective_cache_size. This should\nreflect the free space you have when the database is NOT running. 
I\nexpect in your case that would be something between 40GB and 46GB.\n\nPlease correct me if I'm wrong!\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 09:26:31 +0200", "msg_from": "=?windows-1252?Q?Torsten_F=F6rtsch?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent hangs with 9.2" }, { "msg_contents": "On Tue, Sep 10, 2013 at 02:04:57PM -0400, David Whittaker wrote:\n> Hi Andrew,\n> \n> \n> On Tue, Sep 10, 2013 at 11:26 AM, Andrew Dunstan <[email protected]>wrote:\n> \n> >\n> > On 09/10/2013 11:04 AM, David Whittaker wrote:\n> >\n> >>\n> >> Hi All,\n> >>\n> >> I've been seeing a strange issue with our Postgres install for about a\n> >> year now, and I was hoping someone might be able to help point me at the\n> >> cause. At what seem like fairly random intervals Postgres will become\n> >> unresponsive to the 3 application nodes it services. These periods tend to\n> >> last for 10 - 15 minutes before everything rights itself and the system\n> >> goes back to normal.\n> >>\n> >> During these periods the server will report a spike in the outbound\n> >> bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\n> >> context switches / interrupts (normal peaks are around 2k/8k respectively,\n> >> and during these periods they‘ve gone to 15k/22k), and a load average of\n> >> 100+. CPU usage stays relatively low, but it’s all system time reported,\n> >> user time goes to zero. It doesn‘t seem to be disk related since we’re\n> >> running with a shared_buffers setting of 24G, which will fit just about our\n> >> entire database into memory, and the IO transactions reported by the\n> >> server, as well as the disk reads reported by Postgres stay consistently\n> >> low.\n> >>\n> >> We‘ve recently started tracking how long statements take to execute, and\n> >> we’re seeing some really odd numbers. A simple delete by primary key, for\n> >> example, from a table that contains about 280,000 rows, reportedly took\n> >> 18h59m46.900s. An update by primary key in that same table was reported as\n> >> 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those\n> >> numbers don't seem reasonable at all.\n> >>\n> >> Some other changes we've made to postgresql.conf:\n> >>\n> >> synchronous_commit = off\n> >>\n> >> maintenance_work_mem = 1GB\n> >> wal_level = hot_standby\n> >> wal_buffers = 16MB\n> >>\n> >> max_wal_senders = 10\n> >>\n> >> wal_keep_segments = 5000\n> >>\n> >> checkpoint_segments = 128\n> >>\n> >> checkpoint_timeout = 30min\n> >>\n> >> checkpoint_completion_target = 0.9\n> >>\n> >> max_connections = 500\n> >>\n> >> The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of\n> >> RAM, running Cent OS 6.3.\n> >>\n> >> So far we‘ve tried disabling Transparent Huge Pages after I found a\n> >> number of resources online that indicated similar interrupt/context switch\n> >> issues, but it hasn’t resolve the problem. 
I managed to catch it happening\n> >> once and run a perf which showed:\n> >>\n> >> |\n> >> + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n> >> + 9.55% 10956 postmaster 0x2dc820 f set_config_option\n> >> + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n> >> + 5.75% 6609 postmaster 0x5a2b0 f\n> >> ginHeapTupleFastCollect\n> >> + 2.68% 3084 postmaster 0x192483 f\n> >> build_implied_join_equality\n> >> + 2.61% 2990 postmaster 0x187a55 f build_paths_for_OR\n> >> + 1.86% 2131 postmaster 0x794aa f get_collation_oid\n> >> + 1.56% 1822 postmaster 0x5a67e f\n> >> ginHeapTupleFastInsert\n> >> + 1.53% 1766 postmaster 0x1929bc f\n> >> distribute_qual_to_rels\n> >> + 1.33% 1558 postmaster 0x249671 f cmp_numerics|\n> >>\n> >> I‘m not sure what 0x347ba9 represents, or why it’s an address rather than\n> >> a method name.\n> >>\n> >> That's about the sum of it. Any help would be greatly appreciated and if\n> >> you want any more information about our setup, please feel free to ask.\n> >>\n> >>\n> >>\n> > I have seen cases like this with very high shared_buffers settings.\n> >\n> > 24Gb for shared_buffers is quite high, especially on a 48Gb box. What\n> > happens if you dial that back to, say, 12Gb?\n> >\n> \n> I'd be willing to give it a try. I'd really like to understand what's\n> going on here though. Can you elaborate on that? Why would 24G of shared\n> buffers be too high in this case? The machine is devoted entirely to PG,\n> so having PG use half of the available RAM to cache data doesn't feel\n> unreasonable.\n\nSome of the overhead of bgwriter and checkpoints is more or less linear\nin the size of shared_buffers. If your shared_buffers is large a lot of\ndata could be dirty when a checkpoint starts, resulting in an I/O spike\n... (although we've spread checkpoints in recent pg versions, so this\nshould be less a problem nowadays).\nAnother reason is that the OS cache is also being used for reads and\nwrites and with a large shared_buffers there is a risk of \"doubly cached\ndata\" (in the OS cache + in shared_buffers).\nIn an ideal world most frequently used blocks should be in\nshared_buffers and less frequently used block in the OS cache ..\n\n> \n> \n> >\n> > cheers\n> >\n> > andrew\n> >\n> >\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 10:36:42 +0200", "msg_from": "Julien Cigar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent hangs with 9.2" } ]
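Relatedly, a quick untested check of how big the cluster actually is, to compare against RAM and the shared_buffers / effective_cache_size figures discussed in the thread above:

    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- or summed across all databases in the cluster
    SELECT pg_size_pretty(sum(pg_database_size(datname))::bigint)
    FROM pg_database;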
[ { "msg_contents": "Hi Ken,\n\n\nOn Tue, Sep 10, 2013 at 11:33 AM, [email protected] <[email protected]> wrote:\n\n> On Tue, Sep 10, 2013 at 11:04:21AM -0400, David Whittaker wrote:\n> > Hi All,\n> >\n> > I've been seeing a strange issue with our Postgres install for about a\n> year\n> > now, and I was hoping someone might be able to help point me at the\n> cause.\n> > At what seem like fairly random intervals Postgres will become\n> unresponsive\n> > to the 3 application nodes it services. These periods tend to last for\n> 10 -\n> > 15 minutes before everything rights itself and the system goes back to\n> > normal.\n> >\n> > During these periods the server will report a spike in the outbound\n> > bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\n> > context switches / interrupts (normal peaks are around 2k/8k\n> respectively,\n> > and during these periods they‘ve gone to 15k/22k), and a load average of\n> > 100+. CPU usage stays relatively low, but it’s all system time reported,\n> > user time goes to zero. It doesn‘t seem to be disk related since we’re\n> > running with a shared_buffers setting of 24G, which will fit just about\n> our\n> > entire database into memory, and the IO transactions reported by the\n> > server, as well as the disk reads reported by Postgres stay consistently\n> > low.\n> >\n> > We‘ve recently started tracking how long statements take to execute, and\n> > we’re seeing some really odd numbers. A simple delete by primary key, for\n> > example, from a table that contains about 280,000 rows, reportedly took\n> > 18h59m46.900s. An update by primary key in that same table was reported\n> as\n> > 7d 17h 58m 30.415s. That table is frequently accessed, but obviously\n> those\n> > numbers don't seem reasonable at all.\n> >\n> > Some other changes we've made to postgresql.conf:\n> >\n> > synchronous_commit = off\n> >\n> > maintenance_work_mem = 1GB\n> > wal_level = hot_standby\n> > wal_buffers = 16MB\n> >\n> > max_wal_senders = 10\n> >\n> > wal_keep_segments = 5000\n> >\n> > checkpoint_segments = 128\n> >\n> > checkpoint_timeout = 30min\n> >\n> > checkpoint_completion_target = 0.9\n> >\n> > max_connections = 500\n> >\n> > The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of\n> > RAM, running Cent OS 6.3.\n> >\n> > So far we‘ve tried disabling Transparent Huge Pages after I found a\n> number\n> > of resources online that indicated similar interrupt/context switch\n> issues,\n> > but it hasn’t resolve the problem. I managed to catch it happening once\n> and\n> > run a perf which showed:\n> >\n> >\n> > + 41.40% 48154 postmaster 0x347ba9 f 0x347ba9\n> > + 9.55% 10956 postmaster 0x2dc820 f\n> > set_config_option\n> > + 8.64% 9946 postmaster 0x5a3d4 f writeListPage\n> > + 5.75% 6609 postmaster 0x5a2b0 f\n> > ginHeapTupleFastCollect\n> > + 2.68% 3084 postmaster 0x192483 f\n> > build_implied_join_equality\n> > + 2.61% 2990 postmaster 0x187a55 f\n> > build_paths_for_OR\n> > + 1.86% 2131 postmaster 0x794aa f\n> > get_collation_oid\n> > + 1.56% 1822 postmaster 0x5a67e f\n> > ginHeapTupleFastInsert\n> > + 1.53% 1766 postmaster 0x1929bc f\n> > distribute_qual_to_rels\n> > + 1.33% 1558 postmaster 0x249671 f cmp_numerics\n> >\n> > I‘m not sure what 0x347ba9 represents, or why it’s an address rather\n> than a\n> > method name.\n> >\n> > That's about the sum of it. 
Any help would be greatly appreciated and if\n> > you want any more information about our setup, please feel free to ask.\n> >\n> > Thanks,\n> > Dave\n>\n> Hi Dave,\n>\n> A load average of 100+ means that you have that many processes waiting to\n> run yet you only have 16 cpus. You really need to consider using a\n> connection pooler like pgbouncer to keep your connection count in the\n> 16-32 range.\n>\n>\nThat would make sense if the issues corresponded to increased load, but\nthey don't.  I understand that the load spike is caused by waiting\nprocesses, but it doesn't seem to correspond to a transaction spike.  The\nnumber of transactions per second appears to stay in line with normal usage\nwhen these issues occur.  I do see an increase in postmaster processes when\nit happens, but they don't seem to have entered a transaction yet.  Coupled\nwith the fact that cpu usage is all system time, and the context switch /\ninterrupt spikes, I feel like something must be going on behind the scenes\nleading to these problems.  I'm just not sure what that something is.\n\n\n> Regards,\n> Ken\n>\n", "msg_date": "Tue, 10 Sep 2013 14:05:41 -0400", "msg_from": "David Whittaker <[email protected]>", "msg_from_op": true, "msg_subject": "Intermittent hangs with 9.2" } ]
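Since the load spikes described above don't line up with a transaction spike, one untested way to see what the extra backends are doing the next time an episode hits (column names as in the 9.2 pg_stat_activity view):

    SELECT state, waiting, count(*)
    FROM pg_stat_activity
    GROUP BY state, waiting
    ORDER BY count(*) DESC;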
[ { "msg_contents": "Hi Jeff\n\nOn Tue, Sep 10, 2013 at 1:44 PM, Jeff Janes <[email protected]> wrote:\n\n> On Tue, Sep 10, 2013 at 8:04 AM, David Whittaker <[email protected]> wrote:\n>\n>> Hi All,\n>>\n>> I've been seeing a strange issue with our Postgres install for about a\n>> year now, and I was hoping someone might be able to help point me at the\n>> cause. At what seem like fairly random intervals Postgres will become\n>> unresponsive to the 3 application nodes it services. These periods tend to\n>> last for 10 - 15 minutes before everything rights itself and the system\n>> goes back to normal.\n>>\n>> During these periods the server will report a spike in the outbound\n>> bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in\n>> context switches / interrupts (normal peaks are around 2k/8k respectively,\n>> and during these periods they‘ve gone to 15k/22k), and a load average of\n>> 100+.\n>>\n>\n> I'm curious about the spike it outbound network usage. If the database is\n> hung and no longer responding to queries, what is it getting sent over the\n> network? Can you snoop on that traffic?\n>\n\n\nIt seems curious to me as well. I don't know if one, or a few of the pg\nconnections are streaming out data and somehow blocking the others in the\nprocess, or the data could be unrelated to pg. If you can suggest a tool I\ncould use to monitor the data transfer continuously and get some type of\nsummary of what happened after the issue reoccurs, I'd appreciate it.\n Otherwise, I'll try to get in and catch some specifics the next time it\nhappens.\n\n\n>\n>\n>> CPU usage stays relatively low, but it’s all system time reported, user\n>> time goes to zero. It doesn‘t seem to be disk related since we’re running\n>> with a shared_buffers setting of 24G, which will fit just about our entire\n>> database into memory, and the IO transactions reported by the server, as\n>> well as the disk reads reported by Postgres stay consistently low.\n>>\n> There have been reports that using very large shared_buffers can cause a\n> lot of contention issues in the kernel, for some kernels. The usual advice\n> is not to set shared_buffers above 8GB. The operating system can use the\n> rest of the memory to cache for you.\n>\n> Also, using a connection pooler and lowering the number of connections to\n> the real database has solved problems like this before.\n>\n\nWe're going to implement both of these changes tonight. I was going to go\nwith 12G for shared_buffers based on Andrew's suggestion, but maybe I'll go\ndown to 8 if that seems to be the magic number. We're also going to\ndecrease the max connections from 500 to 100 and decrease the pooled\nconnections per server.\n\n\n>\n>\n>> We‘ve recently started tracking how long statements take to execute,\n>> and we’re seeing some really odd numbers. A simple delete by primary key,\n>> for example, from a table that contains about 280,000 rows, reportedly took\n>> 18h59m46.900s. An update by primary key in that same table was reported as\n>> 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those\n>> numbers don't seem reasonable at all.\n>>\n> How are your tracking those? 
Is it log_min_duration_statement or\n> something else?\n>\n\nWe're using log_min_duration_statement = 1000, sending the log messages to\nsyslog, then analyzing with pg_badger.\n\n\n>\n> Cheers,\n>\n> Jeff\n>\n", "msg_date": "Tue, 10 Sep 2013 14:06:26 -0400", "msg_from": "David Whittaker <[email protected]>", "msg_from_op": true, "msg_subject": "Intermittent hangs with 9.2" } ]
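As a complement to the log_min_duration_statement / pgBadger approach mentioned above, the pg_stat_statements contrib module can aggregate statement timings inside the server itself. A rough, untested sketch (it needs shared_preload_libraries = 'pg_stat_statements' and a restart first):

    CREATE EXTENSION pg_stat_statements;

    SELECT query, calls, total_time, rows
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;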
[ { "msg_contents": "Hi there,\n\nhere is another one from the \"why is my query so slow?\" category. First post, so please bare with me.\n\nThe query (which takes around 6 seconds) is this:\n\nSET work_mem TO '256MB';//else sort spills to disk\n\nSELECT\n\tet.subject,\n\tCOALESCE (createperson.vorname || ' ', '') || createperson.nachname AS \"Sender/Empfänger\",\n\tto_char(es.sentonat, 'DD.MM.YY') AS \"versendet am\",\n\tes.sentonat AS orderbydate,\n\tCOUNT (ct.*),\n\tCOALESCE (C . NAME, 'keine Angabe') :: TEXT AS \"für Kunde\",\n\tCOUNT (ct.datetimesentonat) :: TEXT || ' von ' || COUNT (ct.*) :: TEXT || ' versendet',\n\t1 AS LEVEL,\n\tTRUE AS hassubs,\n\tFALSE AS opensubs,\n\t'emailsendings:' || es. ID :: TEXT AS model_id,\n\tNULL :: TEXT AS parent_model_id,\n\tes. ID\nFROM\n\temailtemplates et\nJOIN emailsendings es ON et. ID = es.emailtemplate_id\nLEFT JOIN companies C ON C . ID = es.customers_id\nLEFT JOIN personen createperson ON createperson. ID = et.personen_create_id\nLEFT JOIN contacts ct ON ct.emailsendings_id = es. ID WHERE f_record_visible_to_currentuser(et.*::coretable) = true \nGROUP BY\n\t1,\n\t2,\n\t3,\n\t4,\n\t6,\n\t8,\n\t9,\n\t10,\n\t11,\n\t12,\n\t13\nORDER BY\n\tes.sentonat desc\n\nExplain analyze:\n\nGroupAggregate (cost=35202.88..45530.77 rows=118033 width=142) (actual time=5119.783..5810.680 rows=898 loops=1)\n -> Sort (cost=35202.88..35497.96 rows=118033 width=142) (actual time=5119.356..5200.457 rows=352744 loops=1)\n Sort Key: es.sentonat, et.subject, ((COALESCE((createperson.vorname || ' '::text), ''::text) || createperson.nachname)), (to_char(es.sentonat, 'DD.MM.YY'::text)), ((COALESCE(c.name, 'keine Angabe'::character varying))::text), (1), (true), (false), (('emailsendings:'::text || (es.id)::text)), (NULL::text), es.id\n Sort Method: quicksort Memory: 198999kB\n -> Nested Loop Left Join (cost=0.00..25259.29 rows=118033 width=142) (actual time=1.146..1896.382 rows=352744 loops=1)\n -> Nested Loop Left Join (cost=0.00..2783.16 rows=302 width=102) (actual time=1.127..32.577 rows=898 loops=1)\n -> Merge Join (cost=0.00..2120.06 rows=302 width=86) (actual time=1.125..30.940 rows=898 loops=1)\n Merge Cond: (et.id = es.emailtemplate_id)\n -> Nested Loop Left Join (cost=0.00..2224.95 rows=277 width=74) (actual time=1.109..27.484 rows=830 loops=1)\n -> Index Scan using emailtemplates_pkey on emailtemplates et (cost=0.00..460.71 rows=277 width=63) (actual time=1.097..20.541 rows=830 loops=1)\n Filter: f_record_visible_to_currentuser((et.*)::coretable)\n -> Index Scan using personen_pkey on personen createperson (cost=0.00..6.36 rows=1 width=19) (actual time=0.006..0.006 rows=1 loops=830)\n Index Cond: (createperson.id = et.personen_create_id)\n -> Index Scan using fki_emailsendings_emailtemplate_id_fkey on emailsendings es (cost=0.00..49.83 rows=905 width=20) (actual time=0.011..1.360 rows=898 loops=1)\n -> Index Scan using firmen_pkey on companies c (cost=0.00..2.18 rows=1 width=24) (actual time=0.001..0.001 rows=0 loops=898)\n Index Cond: (c.id = es.customers_id)\n -> Index Scan using fki_contacts_emailsendings_id_fkey on contacts ct (cost=0.00..61.55 rows=561 width=44) (actual time=0.019..0.738 rows=393 loops=898)\n Index Cond: (ct.emailsendings_id = es.id)\nTotal runtime: 5865.886 ms\n\nI do have an index on es.sentonat. The sentonat-values are all unique, so I don't think I need indexes on all the fields I sort by. 
But then again, my understanding of this might be entirely wrong.\n\nDepeszs' explain (http://explain.depesz.com/s/69O) tells me this:\n\nnode type\tcount\tsum of times\t% of query\nGroupAggregate\t1\t610.223 ms\t10.5 %\nIndex Scan\t5\t690.503 ms\t11.9 %\nMerge Join\t1\t2.096 ms\t0.0 %\nNested Loop Left Join\t3\t1203.783 ms\t20.7 %\nSort\t1\t3304.075 ms\t56.9 %\n\n, so the sort appears to be the problem. Any pointers would be highly appreciated.\n\nMaximilian Tyrtania\nhttp://www.contactking.de\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 12:36:42 +0200", "msg_from": "Maximilian Tyrtania <[email protected]>", "msg_from_op": true, "msg_subject": "slow sort" }, { "msg_contents": "On Wed, Sep 11, 2013 at 3:36 AM, Maximilian Tyrtania\n<[email protected]>wrote:\n\n>\n> JOIN emailsendings es ON et. ID = es.emailtemplate_id\n>\nORDER BY\n> es.sentonat desc\n>\n\n\nPerhaps on an index on (es.emailtemplate_id, es.sentonat desc) would help?\n\nOn Wed, Sep 11, 2013 at 3:36 AM, Maximilian Tyrtania <[email protected]> wrote:\n\n\nJOIN emailsendings es ON et. ID = es.emailtemplate_id \nORDER BY\n        es.sentonat descPerhaps on an index on (es.emailtemplate_id, es.sentonat desc) would help?", "msg_date": "Wed, 11 Sep 2013 06:58:21 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow sort" }, { "msg_contents": "Thanks, unfortunately it (creating that index) didn't. But I rewrote my query using inline subqueries, which already helped a lot.\n\nThanks again,\n\nMaximilian Tyrtania\nhttp://www.contactking.de\n\nAm 11.09.2013 um 15:58 schrieb bricklen <[email protected]>:\n\n> \n> On Wed, Sep 11, 2013 at 3:36 AM, Maximilian Tyrtania <[email protected]> wrote:\n> \n> JOIN emailsendings es ON et. ID = es.emailtemplate_id \n> ORDER BY\n> es.sentonat desc\n> \n> \n> Perhaps on an index on (es.emailtemplate_id, es.sentonat desc) would help?\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 17:24:48 +0200", "msg_from": "Maximilian Tyrtania <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow sort" }, { "msg_contents": "\nOn 09/11/2013 06:36 AM, Maximilian Tyrtania wrote:\n> Hi there,\n>\n> here is another one from the \"why is my query so slow?\" category. First post, so please bare with me.\n>\n> The query (which takes around 6 seconds) is this:\n>\n> SET work_mem TO '256MB';//else sort spills to disk\n>\n> SELECT\n> \tet.subject,\n> \tCOALESCE (createperson.vorname || ' ', '') || createperson.nachname AS \"Sender/Empfänger\",\n> \tto_char(es.sentonat, 'DD.MM.YY') AS \"versendet am\",\n> \tes.sentonat AS orderbydate,\n> \tCOUNT (ct.*),\n> \tCOALESCE (C . NAME, 'keine Angabe') :: TEXT AS \"für Kunde\",\n> \tCOUNT (ct.datetimesentonat) :: TEXT || ' von ' || COUNT (ct.*) :: TEXT || ' versendet',\n> \t1 AS LEVEL,\n> \tTRUE AS hassubs,\n> \tFALSE AS opensubs,\n> \t'emailsendings:' || es. ID :: TEXT AS model_id,\n> \tNULL :: TEXT AS parent_model_id,\n> \tes. ID\n> FROM\n> \temailtemplates et\n> JOIN emailsendings es ON et. ID = es.emailtemplate_id\n> LEFT JOIN companies C ON C . ID = es.customers_id\n> LEFT JOIN personen createperson ON createperson. ID = et.personen_create_id\n> LEFT JOIN contacts ct ON ct.emailsendings_id = es. 
ID WHERE f_record_visible_to_currentuser(et.*::coretable) = true\n> GROUP BY\n> \t1,\n> \t2,\n> \t3,\n> \t4,\n> \t6,\n> \t8,\n> \t9,\n> \t10,\n> \t11,\n> \t12,\n> \t13\n> ORDER BY\n> \tes.sentonat desc\n>\n> Explain analyze:\n>\n> GroupAggregate (cost=35202.88..45530.77 rows=118033 width=142) (actual time=5119.783..5810.680 rows=898 loops=1)\n> -> Sort (cost=35202.88..35497.96 rows=118033 width=142) (actual time=5119.356..5200.457 rows=352744 loops=1)\n> Sort Key: es.sentonat, et.subject, ((COALESCE((createperson.vorname || ' '::text), ''::text) || createperson.nachname)), (to_char(es.sentonat, 'DD.MM.YY'::text)), ((COALESCE(c.name, 'keine Angabe'::character varying))::text), (1), (true), (false), (('emailsendings:'::text || (es.id)::text)), (NULL::text), es.id\n> Sort Method: quicksort Memory: 198999kB\n> -> Nested Loop Left Join (cost=0.00..25259.29 rows=118033 width=142) (actual time=1.146..1896.382 rows=352744 loops=1)\n> -> Nested Loop Left Join (cost=0.00..2783.16 rows=302 width=102) (actual time=1.127..32.577 rows=898 loops=1)\n> -> Merge Join (cost=0.00..2120.06 rows=302 width=86) (actual time=1.125..30.940 rows=898 loops=1)\n> Merge Cond: (et.id = es.emailtemplate_id)\n> -> Nested Loop Left Join (cost=0.00..2224.95 rows=277 width=74) (actual time=1.109..27.484 rows=830 loops=1)\n> -> Index Scan using emailtemplates_pkey on emailtemplates et (cost=0.00..460.71 rows=277 width=63) (actual time=1.097..20.541 rows=830 loops=1)\n> Filter: f_record_visible_to_currentuser((et.*)::coretable)\n> -> Index Scan using personen_pkey on personen createperson (cost=0.00..6.36 rows=1 width=19) (actual time=0.006..0.006 rows=1 loops=830)\n> Index Cond: (createperson.id = et.personen_create_id)\n> -> Index Scan using fki_emailsendings_emailtemplate_id_fkey on emailsendings es (cost=0.00..49.83 rows=905 width=20) (actual time=0.011..1.360 rows=898 loops=1)\n> -> Index Scan using firmen_pkey on companies c (cost=0.00..2.18 rows=1 width=24) (actual time=0.001..0.001 rows=0 loops=898)\n> Index Cond: (c.id = es.customers_id)\n> -> Index Scan using fki_contacts_emailsendings_id_fkey on contacts ct (cost=0.00..61.55 rows=561 width=44) (actual time=0.019..0.738 rows=393 loops=898)\n> Index Cond: (ct.emailsendings_id = es.id)\n> Total runtime: 5865.886 ms\n>\n> I do have an index on es.sentonat. The sentonat-values are all unique, so I don't think I need indexes on all the fields I sort by. But then again, my understanding of this might be entirely wrong.\n>\n> Depeszs' explain (http://explain.depesz.com/s/69O) tells me this:\n>\n> node type\tcount\tsum of times\t% of query\n> GroupAggregate\t1\t610.223 ms\t10.5 %\n> Index Scan\t5\t690.503 ms\t11.9 %\n> Merge Join\t1\t2.096 ms\t0.0 %\n> Nested Loop Left Join\t3\t1203.783 ms\t20.7 %\n> Sort\t1\t3304.075 ms\t56.9 %\n>\n> , so the sort appears to be the problem. Any pointers would be highly appreciated.\n>\n\nI recently had to diagnose and remedy a case such as this.\n\nThe short answer is to rewrite your query so you don't have to group by \nso many things. Collect your aggregates in a common table expression \nquery (or possibly more than one, depends what you need) using the \nminimum non-aggregated columns to enable you to get correct results and \nthen later decorate that with all the extra things you need such as \nconstant columns and columns that are irrelevant to the aggregation.\n\nThis gets hard when queries are very complex, and harder still when the \nquery is written by a query generator. 
But a good generator should not \njust say \"group by everything that's not aggregated\" and think it's \ndoing a good job. In your case it should be relatively straightforward.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 11:31:47 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow sort" }, { "msg_contents": "On 11.09.2013 at 17:31, Andrew Dunstan <[email protected]> wrote:\n\n> I recently had to diagnose and remedy a case such as this.\n> \n> The short answer is to rewrite your query so you don't have to group by so many things. Collect your aggregates in a common table expression query (or possibly more than one, depends what you need) using the minimum non-aggregated columns to enable you to get correct results and then later decorate that with all the extra things you need such as constant columns and columns that are irrelevant to the aggregation.\n> \n> This gets hard when queries are very complex, and harder still when the query is written by a query generator. But a good generator should not just say \"group by everything that's not aggregated\" and think it's doing a good job. In your case it should be relatively straightforward.\n> \n> cheers\n> \n> andrew\n\nAh, yes, only now do I see that the query screams for a CTE. Thanks for the eye opener.\n \nMaximilian Tyrtania\nhttp://www.contactking.de\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Sep 2013 10:00:32 +0200", "msg_from": "Maximilian Tyrtania <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow sort" } ]
[ { "msg_contents": "Hi all,\n\nI have a number of Postgres 9.2.4 databases with the same schema but with\nslightly different contents, running on small servers that are basically\nalike (8-16 GB ram).\n\nWhen I run the same query on these databases it results in one of two\ndifferent execution plans where one is much faster (appx. 50 times) than\nthe other. Each database always gives the same plan, and vacuuming,\nupdating statistics and reindexing doesn't seem to make any difference.\n\nClearly the fast plan is preferred, but I haven't been able to identify\nany pattern (table sizes, tuning etc.) in why one plan is chosen over the\nother, so is there any way I can make Postgres tell me why it chooses to\nplan the way it does?\n\nAs reference I have included the query and execution plans for two\ndifferent databases below. The x and e tables contain about 5 and 10\nmillion records respectively, and the performance difference is perfectly\nreasonable as the outer loop has to process 3576 rows in the fast case and\n154149 rows in the slow case.\n\nBest regards & thanks,\n Mikkel Lauritsen\n\n---\n\nQuery:\n\nSELECT x.r, e.id, a.id\nFROM x\n INNER JOIN e ON x.id = e.id\n INNER JOIN a ON x.a_id = a.id\n INNER JOIN i ON a.i_id = i.id\nWHERE e.h_id = 'foo' AND i.c = 'bar';\n\nFast plan:\n\n Nested Loop (cost=0.00..24553.77 rows=1 width=86) (actual\ntime=2.810..102.451 rows=20 loops=1)\n Join Filter: (x.a_id = a.id)\n Rows Removed by Join Filter: 3556\n -> Nested Loop (cost=0.00..16.55 rows=1 width=39) (actual\ntime=0.036..0.046 rows=3 loops=1)\n -> Index Scan using i_c_idx on i (cost=0.00..8.27 rows=1\nwidth=39) (actual time=0.019..0.020 rows=1 loops=1)\n Index Cond: (c = 'bar'::text)\n -> Index Scan using a_i_id_idx on a (cost=0.00..8.27 rows=1\nwidth=78) (actual time=0.014..0.021 rows=3 loops=1)\n Index Cond: (i_id = i.id)\n -> Nested Loop (cost=0.00..24523.00 rows=1138 width=86) (actual\ntime=2.641..33.818 rows=1192 loops=3)\n -> Index Scan using e_h_id_idx on e (cost=0.00..6171.55\nrows=1525 width=39) (actual time=0.049..1.108 rows=1857 loops=3)\n Index Cond: (h_id = 'foo'::text)\n -> Index Scan using x_id_idx on x (cost=0.00..12.02 rows=1\nwidth=86) (actual time=0.017..0.017 rows=1 loops=5571)\n Index Cond: (id = e.id)\n Total runtime: 102.526 ms\n\nSlow plan:\n\n Nested Loop (cost=0.00..858.88 rows=1 width=86) (actual\ntime=89.430..2589.905 rows=11 loops=1)\n -> Nested Loop (cost=0.00..448.38 rows=169 width=86) (actual\ntime=0.135..142.246 rows=154149 loops=1)\n -> Nested Loop (cost=0.00..16.55 rows=1 width=39) (actual\ntime=0.056..0.064 rows=3 loops=1)\n -> Index Scan using i_c_idx on i (cost=0.00..8.27 rows=1\nwidth=39) (actual time=0.030..0.030 rows=1 loops=1)\n Index Cond: (c = 'bar'::text)\n -> Index Scan using a_i_id_idx on a (cost=0.00..8.27\nrows=1 width=78) (actual time=0.022..0.028 rows=3 loops=1)\n Index Cond: (i_id = i.id)\n -> Index Scan using x_a_id_idx on x (cost=0.00..372.48\nrows=5935 width=86) (actual time=0.065..35.479 rows=51383 loops=3)\n Index Cond: (a_id = a.id)\n -> Index Scan using e_pkey on e (cost=0.00..2.42 rows=1 width=39)\n(actual time=0.015..0.015 rows=0 loops=154149)\n Index Cond: (id = x.id)\n Filter: (h_id = 'foo'::text)\n Rows Removed by Filter: 1\n Total runtime: 2589.970 ms\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 13:16:14 +0200", "msg_from": "Mikkel Lauritsen <[email protected]>", "msg_from_op": 
true, "msg_subject": "Reasons for choosing one execution plan over =?UTF-8?Q?another=3F?=" }, { "msg_contents": "On Wed, Sep 11, 2013 at 4:16 AM, Mikkel Lauritsen <[email protected]> wrote:\n\n> Hi all,\n>\n> I have a number of Postgres 9.2.4 databases with the same schema but with\n> slightly different contents, running on small servers that are basically\n> alike (8-16 GB ram).\n>\n> When I run the same query on these databases it results in one of two\n> different execution plans where one is much faster (appx. 50 times) than\n> the other. Each database always gives the same plan, and vacuuming,\n> updating statistics and reindexing doesn't seem to make any difference.\n>\n> Clearly the fast plan is preferred, but I haven't been able to identify\n> any pattern (table sizes, tuning etc.) in why one plan is chosen over the\n> other, so is there any way I can make Postgres tell me why it chooses to\n> plan the way it does?\n>\n\nAre you sure the schemas are identical, including the existence of\nidentical indexes?\n\nAlso, using \"explain (analyze, buffers)\" gives more info than just \"explain\nanalyze\"\n\nIf you can get both systems to use the same plan, then you can compare the\ncost estimates of each directly. But that is easier said than done.\n\nYou can temporarily drop an index used in the slow query but not the fast\none, to see what plan that comes up with:\n\nbegin; drop index x_a_id_idx; <run query>; rollback;\n\nCheers,\n\nJeff\n\nOn Wed, Sep 11, 2013 at 4:16 AM, Mikkel Lauritsen <[email protected]> wrote:\nHi all,\n\nI have a number of Postgres 9.2.4 databases with the same schema but with\nslightly different contents, running on small servers that are basically\nalike (8-16 GB ram).\n\nWhen I run the same query on these databases it results in one of two\ndifferent execution plans where one is much faster (appx. 50 times) than\nthe other. Each database always gives the same plan, and vacuuming,\nupdating statistics and reindexing doesn't seem to make any difference.\n\nClearly the fast plan is preferred, but I haven't been able to identify\nany pattern (table sizes, tuning etc.) in why one plan is chosen over the\nother, so is there any way I can make Postgres tell me why it chooses to\nplan the way it does?Are you sure the schemas are identical, including the existence of identical indexes?Also, using \"explain (analyze, buffers)\" gives more info than just \"explain analyze\"\nIf you can get both systems to use the same plan, then you can compare the cost estimates of each directly. But that is easier said than done.You can temporarily drop an index used in the slow query but not the fast one, to see what plan that comes up with:\nbegin; drop index x_a_id_idx; <run query>; rollback;Cheers,Jeff", "msg_date": "Wed, 11 Sep 2013 09:20:40 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reasons for choosing one execution plan over another?" }, { "msg_contents": "Il 11/09/2013 13:16, Mikkel Lauritsen ha scritto:\n> Hi all,\n>\n> I have a number of Postgres 9.2.4 databases with the same schema but with\n> slightly different contents, running on small servers that are basically\n> alike (8-16 GB ram).\n>\nI think that your answer can be found in your statement \"slightly \ndifferent contents\". Planner choices query execution plans basing on \nstatistics obtained during ANALYSE operations, including the autovacuum. \nIn this way, Planner can decide which execution plan is the most \nsuitable. 
Different contents in your tables can correspond to different \nstatistical distributions of values in your columns and of rows in your \ntables, leading to different choices by the planner. Execution times can \ndiffer greatly, even by a factor of 10-100.\n\nThere is a parameter (default_statistics_target, or the per-column \nstatistics target) which sets how detailed the statistical sample of a \ntable is. Changing it, probably increasing it, might give you more \ncomparable execution times for the two execution plans, but that is not \nguaranteed.\n\n Giuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 18:55:38 +0200", "msg_from": "Giuseppe Broccolo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reasons for choosing one execution plan over another?" }, { "msg_contents": "On 12/09/13 04:55, Giuseppe Broccolo wrote:\n> On 11/09/2013 13:16, Mikkel Lauritsen wrote:\n>> Hi all,\n>>\n>> I have a number of Postgres 9.2.4 databases with the same schema but \n>> with\n>> slightly different contents, running on small servers that are basically\n>> alike (8-16 GB ram).\n>>\n> I think that your answer can be found in your statement \"slightly \n> different contents\". The planner chooses query execution plans based on \n> statistics obtained during ANALYZE operations, including those run by \n> autovacuum. In this way, the planner can decide which execution plan is \n> the most suitable. Different contents in your tables can correspond to \n> different statistical distributions of values in your columns and of \n> rows in your tables, leading to different choices by the planner. \n> Execution times can differ greatly, even by a factor of 10-100.\n>\n> There is a parameter (default_statistics_target, or the per-column \n> statistics target) which sets how detailed the statistical sample of a \n> table is. Changing it, probably increasing it, might give you more \n> comparable execution times for the two execution plans, but that is not \n> guaranteed.\n>\n> Giuseppe.\n>\nEven identical content could lead to different plans, as the sampling is \ndone randomly (or at least 'randomly' according to the documentation).
", "msg_date": "Thu, 12 Sep 2013 09:07:58 +1200", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reasons for choosing one execution plan over another?" } ]
[ { "msg_contents": "Hi all,\n\nOn Wed, 11 Sep 2013 18:55:38 +0200, Giuseppe Broccolo\n<[email protected]> wrote:\n> Il 11/09/2013 13:16, Mikkel Lauritsen ha scritto:\n> > Hi all,\n> >\n> > I have a number of Postgres 9.2.4 databases with the same schema but\n> > with slightly different contents, running on small servers that are\n> > basically alike (8-16 GB ram).\n>\n> I think that your answer can be found in your statement \"slightly \n> different contents\".\n\nYup, that's what I've been thinking myself so far - it definitely doesn't\nlook as if I'm hitting a bug, and I've been through the schema so I feel\nreasonably sure that I'm not missing an index.\n\nIn the example from my original mail the i and a tables are identical in\nthe two databases. The slow plan is chosen when the x and e tables contain\n3.2M and 6.2M rows, the fast plan has 12.8M and 17M rows.\n\nSo - does anybody with enough insight in the planner know if it sounds\nlikely that it would choose the given plans in these two cases, or if\nit's more likely that I have a tuning problem that leads to bad planning?\n\nAnd if the different plans are to be expected, is there any way I can hint\nat the planner to make it choose the fast plan in both cases?\n\nFWIW if I do a very simple query like\n\nSELECT e.id FROM e INNER JOIN x USING (id) WHERE e.h_id = 'foo';\n\non the two databases I also end up with two different plans (included\nbelow). Here the execution time difference is much less pronounced (note\nthat the \"fast\" execution is on inferior hardware and with a much larger\nresult), but the way the join is executed is the same as in the initial\nlarger plans. Setting seq_page_cost and random_page_cost to the same\nvalue makes the planner choose the fast plan in both cases, but unfortu-\nnately that has no effect on my initial problem :-/\n\nBest regards & thanks,\n Mikkel Lauritsen\n\n---\n\nFast plan:\n\n Nested Loop (cost=0.00..24523.00 rows=1138 width=39) (actual\ntime=2.546..33.858 rows=1192 loops=1)\n Buffers: shared hit=8991\n -> Index Scan using e_h_id_idx on e (cost=0.00..6171.55 rows=1525\nwidth=39) (actual time=0.053..1.211 rows=1857 loops=1)\n Index Cond: (healthtrack_id =\n'-95674114670403931535179954575983492851'::text)\n Buffers: shared hit=350\n -> Index Only Scan using x_pkey on x (cost=0.00..12.02 rows=1\nwidth=39) (actual time=0.017..0.017 rows=1 loops=1857)\n Index Cond: (id = e.id)\n Heap Fetches: 1192\n Buffers: shared hit=8641\n Total runtime: 34.065 ms\n\n\nSlow plan:\n\n Nested Loop (cost=22.25..7020.66 rows=277 width=39) (actual\ntime=0.298..13.996 rows=228 loops=1)\n Buffers: shared hit=3173\n -> Bitmap Heap Scan on e (cost=22.25..2093.50 rows=537 width=39)\n(actual time=0.219..0.628 rows=697 loops=1)\n Recheck Cond: (healthtrack_id = 'foo'::text)\n Buffers: shared hit=152\n -> Bitmap Index Scan on e_h_id_idx (cost=0.00..22.12 rows=537\nwidth=0) (actual time=0.188..0.188 rows=697 loops=1)\n Index Cond: (h_id = 'foo'::text)\n Buffers: shared hit=9\n -> Index Only Scan using x_pkey on x (cost=0.00..9.17 rows=1\nwidth=39) (actual time=0.018..0.018 rows=0 loops=697)\n Index Cond: (id = e.id)\n Heap Fetches: 228\n Buffers: shared hit=3021\n Total runtime: 14.070 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Sep 2013 21:23:20 +0200", "msg_from": "Mikkel Lauritsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reasons for choosing one 
execution plan over\n =?UTF-8?Q?another=3F?=" }, { "msg_contents": "I wrote:\n\n--- snip ---\n\n> So - does anybody with enough insight in the planner know if it sounds\n> likely that it would choose the given plans in these two cases, or if\n> it's more likely that I have a tuning problem that leads to bad\n> planning?\n\nDuh. It suddenly dawned on me that I need to look closer at the plans...\n\nThe big difference in the estimated and actual row count in lines like\n\n-> Nested Loop (cost=0.00..250.78 rows=338 width=47) (actual\ntime=0.100..189.676 rows=187012 loops=1)\n\nindicates that the planner is somehow mislead by the statistics on (at\nleast) one of the tables, right? Any suggestions as to how I go about\ninvestigating that further?\n\nOne thing here that is slightly confusing is the relationship between\nthe estimated row count of 169 in the outer loop and 6059 in the last\nindex scan in the partial plan below. How do they relate to each other?\n\n-> Nested Loop (cost=0.00..452.10 rows=169 width=47) (actual\ntime=0.088..41.244 rows=32863 loops=1)\n -> Nested Loop (cost=0.00..16.55 rows=1 width=39) (actual\ntime=0.031..0.035 rows=1 loops=1)\n -> Index Scan using i_c_id on i (cost=0.00..8.27 rows=1\nwidth=39) (actual time=0.016..0.017 rows=1 loops=1)\n Index Cond: (c = 'bar'::text)\n -> Index Scan using a_i_id_idx on a (cost=0.00..8.27 rows=1\nwidth=78) (actual time=0.012..0.013 rows=1 loops=1)\n Index Cond: (i_id = i.id)\n -> Index Scan using x_a_id_idx on x (cost=0.00..374.95 rows=6059\nwidth=86) (actual time=0.055..27.219 rows=32863 loops=1)\n Index Cond: (a_id = a.id)\n\n\nBest regards & thanks,\n Mikkel Lauritsen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Sep 2013 15:29:28 +0200", "msg_from": "Mikkel Lauritsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reasons for choosing one execution plan over\n =?UTF-8?Q?another=3F?=" } ]
[ { "msg_contents": "Hi To all Pg performance users,\n\nwe've found a strange behaviour in PostgreSQL 9.1.9.\n_Here' our server not default configuration :_\n\n\ndefault_statistics_target = 100 # pgtune wizard 2011-07-06\nmaintenance_work_mem = 384MB # pgtune wizard 2011-07-06\nconstraint_exclusion = on # pgtune wizard 2011-07-06\ncheckpoint_completion_target = 0.9 # pgtune wizard 2011-07-06\neffective_cache_size = 4608MB # pgtune wizard 2011-07-06\nwork_mem = 36MB # pgtune wizard 2011-07-06\nwal_buffers = 8MB # pgtune wizard 2011-07-06\nshared_buffers = 1024MB # pgtune wizard 2011-07-06\nmax_connections = 200 # pgtune wizard 2011-07-06\nrandom_page_cost = 1.5\ncheckpoint_segments = 20\n\nThe server has 16G ram and 16G swap\n\nHere the story :\n\n\nWe have a table witch store some tree data :\n\nCREATE TABLE rfoade\n(\n rfoade___rforefide character varying(32) NOT NULL, -- Tree Category\n rfoade___rfovdeide character varying(32) NOT NULL, -- Tree NAME\n rfoade_i_rfodstide character varying(32) NOT NULL, -- Element NAME\n rfoadeaxe integer NOT NULL DEFAULT 0, -- ( not interresting here)\n rfoadervs integer NOT NULL, -- Tree revision\n rfoadenpm integer DEFAULT 1, -- ( not interresting here)\n rfoade_s_rfodstide character varying(32) NOT NULL, -- Element Father\n rfoadegch character varying(104) NOT NULL DEFAULT '0'::character \nvarying, -- Left Marker (used for query part of trees)\n rfoadedrt character varying(104) NOT NULL DEFAULT '99999'::character \nvarying, -- Right Marker (used for query part of trees)\n rfoadeniv integer NOT NULL DEFAULT 0, -- Depth in trees\n rfoadetxt character varying(1500), -- Free text\n rfoadenum integer NOT NULL DEFAULT 99999, -- Mathematical data used \nfor generating left and right markers\n rfoadeden integer NOT NULL DEFAULT 999, -- Mathematical data used for \ngenerating left and right markers\n rfoadechm character varying(4000) NOT NULL DEFAULT \n'INVALID'::character varying, -- String with data about path to this node\n rfoadeord integer NOT NULL DEFAULT 999999, -- (order of node in \nbrotherhood)\n CONSTRAINT rfoade_pk PRIMARY KEY (rfoade___rforefide, \nrfoade_i_rfodstide, rfoade___rfovdeide, rfoadervs)\n USING INDEX TABLESPACE tb_index_axabas,\n CONSTRAINT rfoade_fk_ade FOREIGN KEY (rfoade___rforefide, \nrfoade___rfovdeide, rfoade_s_rfodstide, rfoadervs) -- Constraint : \nfather must exist\n REFERENCES rfoade (rfoade___rforefide, rfoade___rfovdeide, \nrfoade_i_rfodstide, rfoadervs) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT rfoade_fk_vde FOREIGN KEY (rfoade___rforefide, \nrfoade___rfovdeide, rfoadervs, rfoadeaxe) -- Constraint : tree must\n REFERENCES rfovde (rfovde___rforefide, rfovdeide, rfovdervs, \nrfovdeaxe) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT rfoade_int CHECK (rfoadedrt::text > rfoadegch::text),\n CONSTRAINT rfoade_ord CHECK (rfoadenum >= rfoadeden)\n)\n\nThis table is storing all trees of 'elements' in different \norganisations, one element can be in many trees\n\nThe query witch lead to the evil behaviour is this one : (\"analyse \nrfoade\" was run just before)\n\ninsert into rfoade ( rfoadechm, rfoadegch, rfoadedrt, rfoadenum, \nrfoadeden, rfoadeniv, rfoade___rforefide, rfoade___rfovdeide, rfoadervs, \nrfoade_i_rfodstide, rfoade_s_rfodstide, rfoadetxt, rfoadenpm, rfoadeord, \nrfoadeaxe)\nSELECT reffils.rfoadechm,\n reffils.rfoadegch,\n reffils.rfoadedrt,\n reffils.rfoadenum,\n reffils.rfoadeden,\n reffils.rfoadeniv,\n reffils.rfoade___rforefide,\n 'ANA_HORS_CARB_COMB',\n 1,\n 
reffils.rfoade_i_rfodstide,\n reffils.rfoade_s_rfodstide,\n reffils.rfoadetxt,\n reffils.rfoadenpm,\n reffils.rfoadeord,\n reffils.rfoadeaxe\nFROM rfoade ref\n JOIN rfoade reffils\n ON reffils.rfoade___rforefide = 'CHUL'\n AND reffils.rfoade___rfovdeide = 'UF_SA'\n AND reffils.rfoadervs = '1'\n AND reffils.rfoadegch > ref.rfoadegch\n AND reffils.rfoadedrt < ref.rfoadedrt\nWHERE ref.rfoadeniv = 2\n AND ref.rfoade___rforefide = 'CHUL'\n AND ref.rfoade___rfovdeide = 'UF_SA'\n AND ref.rfoadervs = '1'\n AND ref.rfoade_i_rfodstide IN (SELECT rfoade_i_rfodstide\n FROM rfoade cible\n WHERE rfoade___rforefide = 'CHUL'\n AND rfoade___rfovdeide = 'ANA_HORS_CARB_COMB'\n AND rfoadervs = '1')\n\nThis query means: \"I want to create in tree ANA_HORS_CARB_COMB all \nnodes that are under level 2 of tree UF_SA, if I can find the level 2 \nelement in tree ANA_HORS_CARB_COMB.\"\n\n\nTree ANA_HORS_CARB_COMB contains 5k lines, tree UF_SA contains 3k lines. \nThe whole table with all trees contains 230k lines.\n\n\n_Here is the default PLAN:_\nhttp://explain.depesz.com/s/vnkT\n\n*I can't show you the EXPLAIN ANALYZE of this query because when it \nfails, all memory and swap (16G+16G) are used and the query is killed by \nthe Linux OOM killer.*\n\nI tried to use:\n\nset enable_material = false;\n\n_I suspected the Materialize node was causing the problem; here is the \nnew plan:_\n\nhttp://explain.depesz.com/s/k1Y\n\nThe query took 2 seconds without any problems.\n\nBut it's not over:\n\nI re-enabled materialization (set enable_material = true;) and reran the \nquery, and it ran well this time (same first plan).\n\nThen I went back to my real application, launching the query on the same \ndatabase: the query failed badly another time (same first plan), using \nall my memory and being killed.\n\nFor the moment, I am disabling enable_material to run this query in my \napp, but I am quite sure there's something I've missed.\n\nIf any of you has a hint about this situation, I would greatly \nappreciate it!\n\n*Thanks for (long) reading!*\n\nSouquières Adam
", "msg_date": "Thu, 12 Sep 2013 12:14:21 +0200", "msg_from": "Souquieres Adam <[email protected]>", "msg_from_op": true, "msg_subject": "Memory-olic query and Materialize" } ]
[ { "msg_contents": "I'm trying to do a pg_dump of a database, and it more-or-less just sits\nthere doing nothing. \"vmstat 2\" looked like this during pg_dump:\n\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n 0 1 51392 1299708 122088 9980572 0 0 1024 0 96 190 2 0\n81 18\n 0 1 51392 1299584 122084 9980692 0 0 1216 0 99 190 1 0\n88 11\n 1 1 51392 1299316 122092 9980712 0 0 1088 6 100 197 1 0\n83 16\n 0 1 51392 1298960 122092 9981408 0 0 1472 0 93 202 1 0\n88 11\n 0 1 51392 1298756 122084 9980804 22 0 534 108 132 264 1 0\n86 13\n 0 1 51392 1300700 122088 9978796 0 0 128 6 62 131 0 0\n87 13\n 0 1 51392 1300756 122084 9979136 0 0 1728 1336 135 223 1 0\n86 13\n 0 1 51392 1300772 122088 9978748 0 0 960 18 97 189 1 0\n87 12\n\nThere was no significant CPU usage. top(1) shows pg_dump using about 14%\nCPU and postmaster using about 4% CPU, and nothing else going on.\n\nI suspected many things (bad battery on 3WARE RAID, rogue process, etc.),\nbut everything reported OK. bonnie++ reported excellent results, exactly\nthe same as when the server was installed. If I stop the pg_dump, restart\nPostgres just for good measure, and run pg_bench, it reports good results:\n\npgbench -U test -c 5 -t 20000\ntps = 2270\n\npgbench -U test -c 10 -t 10000\ntps = 3329\n\npgbench -U test -c 20 -t 5000\nps = 4766\n\npgbench -U test -c 30 -t 3333\ntps = 7309\n\npgbench -U test -c 40 -t 2500\ntps = 7539\n\npgbench -U test -c 50 -t 2000\ntps = 8618\n\nBut if I restart pg_dump, it's as slow as before.\n\nFurthermore, when I do a pg_dump on the second identically-configured\nserver (with the same database schema and almost the same data), and run\n\"vmstat 2\", I get very high througput of the dump, and \"vmstat 2\" shows\nmuch more reasonable results:\n\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n 1 0 303824 49892 521596 10637436 0 0 16706 19654 1078 1013 17 1\n81 1\n 1 0 303824 46808 521584 10641048 0 0 16320 19628 1117 1099 17 1\n81 1\n 2 0 303824 45480 521560 10643572 0 0 17088 19620 869 1015 17 1\n81 1\n 2 0 303824 53540 521564 10639788 0 0 15680 19664 836 975 18 1\n80 1\n 2 0 303824 63524 521500 10632188 0 0 16192 19630 963 1029 16 1\n82 0\n 2 0 303824 64112 521508 10632160 0 0 13126 24570 946 965 16 1\n82 1\n\nOn both servers, I'm sending the output of pg_dump to /tmp to eliminate\npossible network problems. The command is:\n\n pg_dump --format=c --verbose --blobs -U postgres emolecules \\\n >/tmp/emolecules-$server-$date.pg_dump\n2>/emi/logs/backup_$server_emolecules.log &\n\nBoth servers:\n Data: 8-disk RAID10\n WAL: 2-disk RAID1\n Linux: 2-disk RAID1\n 2x4-Core Intel(R) Xeon(R) CPU E5620 @ 2.40GHz\n\n\"Good\" server:\n 8GB RAM\n Postgres 8.4.17\n\n\"Bad\" server\n 12 GB RAM\n Postgres 9.2.1\n\nI don't even know where to look next. What could be making pg_dump so slow?\n\nThanks,\nCraig\n\nI'm trying to do a pg_dump of a database, and it more-or-less just sits there doing nothing.  
\"vmstat 2\" looked like this during pg_dump:\nprocs  -----------memory----------  ---swap-- -----io---- -system-- ----cpu---- r  b   swpd    free   buff  cache    si   so    bi    bo   in   cs us sy id wa\n 0  1  51392 1299708 122088 9980572    0    0  1024     0   96  190  2  0 81 18 0  1  51392 1299584 122084 9980692    0    0  1216     0   99  190  1  0 88 11\n 1  1  51392 1299316 122092 9980712    0    0  1088     6  100  197  1  0 83 16 0  1  51392 1298960 122092 9981408    0    0  1472     0   93  202  1  0 88 11\n 0  1  51392 1298756 122084 9980804   22    0   534   108  132  264  1  0 86 13 0  1  51392 1300700 122088 9978796    0    0   128     6   62  131  0  0 87 13\n 0  1  51392 1300756 122084 9979136    0    0  1728  1336  135  223  1  0 86 13 0  1  51392 1300772 122088 9978748    0    0   960    18   97  189  1  0 87 12\nThere was no significant CPU usage. top(1) shows pg_dump using about 14% CPU and postmaster using about 4% CPU, and nothing else going on.\nI suspected many things (bad battery on 3WARE RAID, rogue process, etc.), but everything reported OK.  bonnie++ reported excellent results, exactly the same as when the server was installed.  If I stop the pg_dump, restart Postgres just for good measure, and run pg_bench, it reports good results:\npgbench -U test -c 5 -t 20000tps = 2270pgbench -U test -c 10 -t 10000tps = 3329pgbench -U test -c 20 -t 5000ps = 4766pgbench -U test -c 30 -t 3333tps = 7309pgbench -U test -c 40 -t 2500\ntps = 7539pgbench -U test -c 50 -t 2000tps = 8618But if I restart pg_dump, it's as slow as before.Furthermore, when I do a pg_dump on the second identically-configured server (with the same database schema and almost the same data), and run \"vmstat 2\", I get very high througput of the dump, and \"vmstat 2\" shows much more reasonable results:\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa\n 1  0 303824  49892 521596 10637436    0    0 16706 19654 1078 1013 17  1 81  1 1  0 303824  46808 521584 10641048    0    0 16320 19628 1117 1099 17  1 81  1\n 2  0 303824  45480 521560 10643572    0    0 17088 19620  869 1015 17  1 81  1 2  0 303824  53540 521564 10639788    0    0 15680 19664  836  975 18  1 80  1\n 2  0 303824  63524 521500 10632188    0    0 16192 19630  963 1029 16  1 82  0 2  0 303824  64112 521508 10632160    0    0 13126 24570  946  965 16  1 82  1\n On both servers, I'm sending the output of pg_dump to /tmp to eliminate possible network problems.  The command is:\n  pg_dump --format=c --verbose --blobs -U postgres emolecules \\    >/tmp/emolecules-$server-$date.pg_dump 2>/emi/logs/backup_$server_emolecules.log &Both servers:  Data: 8-disk RAID10\n  WAL: 2-disk RAID1  Linux: 2-disk RAID1  2x4-Core Intel(R) Xeon(R) CPU E5620  @ 2.40GHz \"Good\" server:   8GB RAM   Postgres 8.4.17\n\"Bad\" server  12 GB RAM  Postgres 9.2.1I don't even know where to look next.  What could be making pg_dump so slow?Thanks,\nCraig", "msg_date": "Sat, 14 Sep 2013 11:28:36 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Extremely slow server?" }, { "msg_contents": "On Sat, Sep 14, 2013 at 11:28 AM, Craig James <[email protected]> wrote:\n\n> I'm trying to do a pg_dump of a database, and it more-or-less just sits\n> there doing nothing.\n>\n\nWhat is running in the db? 
Perhaps there is something blocking the pg_dump?\nWhat does the output of the following query look like?\n\nselect * from pg_stat_activity where pid <> pg_backend_pid()\n\nOn Sat, Sep 14, 2013 at 11:28 AM, Craig James <[email protected]> wrote:\nI'm trying to do a pg_dump of a database, and it more-or-less just sits there doing nothing. \nWhat is running in the db? Perhaps there is something blocking the pg_dump? What does the output of the following query look like?\nselect * from pg_stat_activity where pid <> pg_backend_pid()", "msg_date": "Sat, 14 Sep 2013 11:36:04 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow server?" }, { "msg_contents": "On Sat, Sep 14, 2013 at 11:36 AM, bricklen <[email protected]> wrote:\n\n> On Sat, Sep 14, 2013 at 11:28 AM, Craig James <[email protected]>wrote:\n>\n>> I'm trying to do a pg_dump of a database, and it more-or-less just sits\n>> there doing nothing.\n>>\n>\n> What is running in the db? Perhaps there is something blocking the\n> pg_dump? What does the output of the following query look like?\n>\n> select * from pg_stat_activity where pid <> pg_backend_pid()\n>\n>\n=# select * from pg_stat_activity where pid <> pg_backend_pid();\n datid | datname | pid | usesysid | usename | application_name |\nclient_addr | client_hostname | client_port | backend_start\n | xact_start | query_start |\nstate_change | waiting | state |\n query\n--------+------------+-------+----------+----------+------------------+-------------+-----------------+-------------+------------------------------\n-+-------------------------------+------------------------------+-------------------------------+---------+--------+-------------------------------\n-------------------------------------------------------------------------\n 231308 | emolecules | 13312 | 10 | postgres | pg_dump\n| | | -1 | 2013-09-14\n18:37:08.752938-07\n | 2013-09-14 18:37:08.783782-07 | 2013-09-14 18:39:43.74618-07 |\n2013-09-14 18:39:43.746181-07 | f | active | COPY\norders.chmoogle_thesaurus\n (thesaurus_id, version_id, normalized, identifier, typecode) TO stdout;\n(1 row)\n\nAnd a bit later:\n\n 231308 | emolecules | 13312 | 10 | postgres | pg_dump\n| | | -1 | 2013-09-14\n18:37:08.752938-07\n | 2013-09-14 18:37:08.783782-07 | 2013-09-14 18:47:38.287109-07 |\n2013-09-14 18:47:38.287111-07 | f | active | COPY\norders.customer_order_items (customer_order_item_id, customer_order_id,\norig_smiles, orig_sdf, orig_datatype, orig_catalog_num, orig_rownum,\ncansmiles, version_id, version_smiles, parent_id, match_type, catalogue_id,\nsupplier_id, sample_id, catalog_num, price_code, reason, discount,\nformat_ordered, amount_ordered, units_ordered, format_quoted,\namount_quoted, units_quoted, price_quoted, wholesale, nitems_shipped,\nnitems_analytical, comment, salt_name, salt_ratio, original_order_id,\nprice_quoted_usd, wholesale_usd, invoice_price, hazardous) TO stdout;\n(1 row)\n\nThe Apache web server is shut off, and I upgraded to Postgres 9.2.4 since\nmy first email.\n\ntop(1) reports nothing interesting that I can see:\n\ntop - 18:50:18 up 340 days, 3:28, 4 users, load average: 1.46, 1.40, 1.17\nTasks: 282 total, 1 running, 281 sleeping, 0 stopped, 0 zombie\nCpu(s): 0.2%us, 0.1%sy, 0.0%ni, 86.9%id, 12.7%wa, 0.0%hi, 0.0%si,\n0.0%st\nMem: 12322340k total, 11465028k used, 857312k free, 53028k buffers\nSwap: 19796984k total, 50224k used, 19746760k free, 10724856k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n13311 emi 20 0 28476 8324 1344 
S 1 0.1 3:48.90 pg_dump\n13359 emi 20 0 19368 1576 1076 R 0 0.0 0:00.09 top\n 1 root 20 0 23840 1372 688 S 0 0.0 0:03.13 init\n 2 root 20 0 0 0 0 S 0 0.0 0:00.04 kthreadd\n 3 root RT 0 0 0 0 S 0 0.0 0:00.27 migration/0\n 4 root 20 0 0 0 0 S 0 0.0 0:05.85 ksoftirqd/0\n... etc.\n\n\nInterestingly, it starts off going fairly fast according to \"vmstat 2\".\nThis was started almost immediately after pg_dump started. Notice that it\ngoes well for a couple minutes, slows for a bit, speeds up, then drops to\nalmost nothing. It stays that way forever, just not doing anything. See\nbelow.\n\nThanks,\nCraig\n\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n 1 0 50492 4129584 53672 7455892 0 0 46 37 0 0 0 0\n99 0\n 1 0 50488 4091536 53672 7494072 0 0 15744 0 389 717 12 0\n87 0\n 3 0 50480 4053868 53672 7531836 0 0 15680 36 408 759 17 0\n83 0\n 1 0 50480 4016252 53680 7569636 0 0 15580 6 379 700 12 0\n87 0\n 2 0 50480 3979140 53680 7606800 0 0 15360 0 372 682 17 0\n82 0\n 1 1 50480 3943536 53696 7642740 0 0 14720 68846 2076 1343 11 1\n81 7\n 1 0 50448 3906040 53700 7680044 0 0 15488 2 456 760 18 0\n80 1\n 1 0 50424 3866036 53704 7720436 0 0 16818 0 389 713 13 1\n86 0\n 1 0 50424 3813564 53712 7772656 0 0 21952 16 424 772 18 1\n81 0\n 1 0 50424 3762460 53712 7823840 0 0 21376 0 414 761 13 0\n87 0\n 1 0 50424 3710416 53720 7875968 0 0 22080 6 420 777 18 0\n82 0\n 1 0 50424 3660616 53720 7925828 0 0 20672 0 412 777 11 0\n88 0\n 1 0 50424 3611208 53720 7975484 0 0 20608 0 411 753 17 0\n83 0\n 1 0 50424 3566916 53728 8019580 0 0 18176 14 406 743 14 0\n86 0\n 1 0 50424 3514708 53728 8071780 0 0 21760 0 427 791 17 1\n83 0\n 0 1 50424 3478924 53736 8107700 0 0 15360 6 323 592 9 0\n87 3\n 0 1 50424 3478636 53736 8108040 0 0 960 0 95 164 2 0\n81 17\n 2 0 50424 3456340 52604 8130768 0 0 10818 36 365 532 6 0\n89 5\n 1 0 50424 3413248 52580 8173744 0 0 21504 6 612 814 19 1\n81 0\n 2 0 50424 3376956 52580 8210000 0 0 18368 0 554 759 15 0\n85 0\n 2 1 50424 3338208 52600 8249304 0 0 19456 21360 848 800 18 0\n81 0\n 2 0 50424 3304936 52600 8283196 0 0 16896 31532 1238 759 13 0\n84 3\n 2 0 50424 3265300 52600 8323020 0 0 19776 0 581 780 18 0\n82 0\n 3 0 50424 3218968 52608 8369276 0 0 23104 18 633 843 16 1\n83 0\n 1 0 50424 3177500 52608 8410292 0 0 20544 0 583 761 15 0\n85 0\n 2 0 50424 3133868 52608 8453808 0 0 21760 22 609 807 16 1\n83 0\n 1 0 50424 3094220 52608 8493780 0 0 19840 0 576 765 18 1\n81 0\n 2 0 50424 3054492 52608 8533436 0 0 19712 0 564 748 14 0\n85 0\n 1 0 50424 3015664 52624 8572184 0 0 19264 12 562 755 18 0\n82 0\n 0 1 50380 2946524 52628 8641404 0 0 34734 0 856 1035 15 1\n83 1\n 2 0 50380 2882988 52636 8704484 0 0 28784 10 757 853 15 0\n84 1\n 1 0 50376 2816264 52636 8771276 0 0 29632 4 751 842 15 0\n84 1\n 1 0 50376 2743160 52636 8843216 0 0 32832 36 817 910 16 0\n83 0\n 1 0 50372 2655072 52644 8931240 0 0 40260 6 912 892 17 1\n82 1\n 2 0 50356 2531112 52644 9055776 0 0 58560 0 1211 940 15 1\n84 0\n 2 0 50356 2406352 52660 9180476 0 0 58600 30 1214 945 18 1\n80 1\n 2 0 50356 2303256 52664 9284364 0 0 48512 55414 2022 883 15 1\n80 4\n 1 0 50356 2174184 52664 9413440 0 0 60608 0 1240 941 17 1\n82 0\n 1 0 50356 2091428 52672 9495784 0 0 37652 20 858 758 14 0\n86 0\n 1 0 50356 2022020 52672 9565420 0 0 31296 0 748 694 16 1\n83 0\n 1 0 50356 1953052 52680 9634684 0 0 31168 48 759 709 14 0\n86 0\n 1 1 50356 1903932 52700 9682092 10 0 21024 0 632 813 11 0\n85 3\n 1 0 50356 1862844 52700 9723072 0 0 20138 0 702 981 
15 1\n78 6\n 1 0 50356 1809516 52716 9776556 0 0 22400 34 619 783 15 0\n85 0\n 1 0 50348 1755744 52716 9830160 0 0 22208 0 605 774 14 0\n86 0\n 2 0 50336 1702324 52724 9884184 0 0 22400 6 600 773 16 0\n84 0\n 1 0 50336 1648008 52732 9938304 0 0 22592 6 623 796 14 1\n85 0\n 2 0 50336 1592896 52732 9993144 0 0 23424 36 658 861 15 0\n84 0\n 1 0 50336 1547552 52744 10038416 0 0 19264 66 1045 980 12 0\n85 3\n 2 0 50336 1490988 52744 10095964 0 0 26116 0 676 732 15 1\n80 3\n 1 0 50336 1434456 52760 10151276 0 0 26600 32 697 758 13 1\n86 0\n 1 2 50336 1408144 52768 10177992 0 0 11270 52636 1296 819 13 0\n82 5\n 1 0 50336 1369920 52768 10216120 0 0 15872 3710 702 681 15 0\n84 1\n 1 0 50336 1331664 52776 10254604 0 0 16000 18 501 687 16 0\n84 0\n 1 0 50328 1293004 52776 10293184 0 0 15936 0 504 670 13 0\n87 0\n 1 0 50268 1254560 52784 10331812 0 0 15936 18 512 680 16 0\n83 0\n 1 0 50268 1216276 52784 10370156 0 0 15872 0 514 688 14 0\n86 0\n 1 0 50252 1177600 52784 10408532 0 0 16000 0 502 672 15 0\n85 0\n 1 0 50244 1138936 52796 10447412 0 0 16072 8 512 680 14 0\n86 0\n 0 1 50220 1109028 52796 10476884 0 0 12608 18 423 566 11 0\n85 3\n 0 1 50220 1109912 52804 10476520 0 0 704 10 100 196 1 0\n84 14\n 0 1 50220 1107952 52800 10478324 0 0 1728 0 127 260 2 0\n88 10\n 1 0 50220 1088676 52800 10497236 0 0 8832 36 336 513 7 0\n86 6\n 1 0 50220 1038608 52808 10547152 0 0 20856 146 610 763 15 1\n84 0\n 1 0 50220 985768 52808 10600160 0 0 22336 0 613 774 13 0\n86 0\n 1 0 50220 933764 52824 10652336 0 0 21888 28 601 767 16 0\n84 0\n 1 0 50220 883664 52824 10702216 0 0 20800 0 589 752 14 1\n86 0\n 0 1 50220 879304 52808 10706896 0 0 3584 44682 928 335 4 0\n84 12\n 0 2 50220 877780 52816 10707928 0 0 1408 16 110 188 2 0\n84 14\n 0 1 50220 877156 52816 10709232 0 0 1472 0 112 196 2 0\n87 11\n 0 1 50220 876540 52804 10710044 0 0 1408 20 119 213 1 0\n85 14\n 0 1 50220 876104 52812 10710008 0 0 1024 6 117 234 1 0\n89 10\n 0 1 50220 875736 52812 10710072 0 0 896 0 102 197 1 0\n84 15\n 0 1 50220 875248 52820 10710856 0 0 1280 8 128 229 2 0\n85 13\n 2 1 50220 874628 52820 10711556 0 0 1216 0 110 198 2 0\n88 10\n 1 1 50220 873992 52820 10712288 0 0 1280 0 103 189 3 0\n83 14\n 0 1 50220 875564 52824 10710784 0 0 384 6 113 207 1 0\n86 13\n 0 1 50220 875460 52824 10710448 0 0 1216 36 142 263 2 0\n86 12\n 1 1 50220 873852 52820 10712376 0 0 1856 0 143 255 3 0\n85 12\n 0 1 50220 873752 52828 10712144 0 0 832 6 112 210 2 0\n86 12\n 1 1 50220 873336 52832 10712884 0 0 1280 20 112 207 2 0\n86 12\n 0 1 50220 872588 52840 10713684 0 0 1280 6 119 214 2 0\n87 11\n 0 1 50220 872276 52840 10713944 0 0 1152 3884 181 184 2 0\n86 13\n 0 1 50220 871672 52836 10714584 0 0 1088 4 91 169 1 0\n85 13\n 0 1 50220 871504 52844 10714636 0 0 1088 14 116 200 2 0\n86 13\n 1 1 50220 871040 52852 10715800 0 0 1280 34 118 217 2 0\n87 12\n 0 1 50220 870308 52852 10716292 0 0 1088 0 115 204 1 0\n87 12\n 0 1 50220 869972 52860 10716428 0 0 1216 6 103 190 2 0\n85 14\n 0 1 50220 869492 52864 10716792 0 0 1024 2 98 179 2 0\n85 13\n 0 1 50220 869372 52872 10716676 0 0 960 6 108 204 1 0\n88 11\n 0 1 50220 869076 52872 10717008 0 0 1088 0 107 187 2 0\n84 14\n 0 1 50220 868380 52864 10717668 0 0 1152 0 116 203 1 0\n88 11\n 0 1 50220 867384 52872 10718000 0 0 1026 42 142 277 1 0\n87 12\n 0 1 50220 866332 52872 10719644 0 0 1600 0 118 205 2 0\n83 14\n 1 0 50220 866424 52880 10719784 0 0 1216 6 100 184 1 0\n86 12\n 0 1 50220 865836 52880 10720324 0 0 1216 20 101 197 1 0\n87 12\n 0 1 50220 865524 52880 10720592 0 0 960 0 113 207 1 0\n85 14\n\nOn Sat, Sep 14, 2013 
at 11:36 AM, bricklen <[email protected]> wrote:\nOn Sat, Sep 14, 2013 at 11:28 AM, Craig James <[email protected]> wrote:\nI'm trying to do a pg_dump of a database, and it more-or-less just sits there doing nothing. \nWhat is running in the db? Perhaps there is something blocking the pg_dump? What does the output of the following query look like?\nselect * from pg_stat_activity where pid <> pg_backend_pid()\n=# select * from pg_stat_activity where pid <> pg_backend_pid(); datid  |  datname   |  pid  | usesysid | usename  | application_name | client_addr | client_hostname | client_port |         backend_start        \n |          xact_start           |         query_start          |         state_change          | waiting | state  |                                                 query                                                  \n--------+------------+-------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+------------------------------+-------------------------------+---------+--------+-------------------------------\n------------------------------------------------------------------------- 231308 | emolecules | 13312 |       10 | postgres | pg_dump          |             |                 |          -1 | 2013-09-14 18:37:08.752938-07\n | 2013-09-14 18:37:08.783782-07 | 2013-09-14 18:39:43.74618-07 | 2013-09-14 18:39:43.746181-07 | f       | active | COPY orders.chmoogle_thesaurus (thesaurus_id, version_id, normalized, identifier, typecode) TO stdout;\n(1 row)And a bit later: 231308 | emolecules | 13312 |       10 | postgres | pg_dump          |             |                 |          -1 | 2013-09-14 18:37:08.752938-07\n | 2013-09-14 18:37:08.783782-07 | 2013-09-14 18:47:38.287109-07 | 2013-09-14 18:47:38.287111-07 | f       | active | COPY orders.customer_order_items (customer_order_item_id, customer_order_id, orig_smiles, orig_sdf, orig_datatype, orig_catalog_num, orig_rownum, cansmiles, version_id, version_smiles, parent_id, match_type, catalogue_id, supplier_id, sample_id, catalog_num, price_code, reason, discount, format_ordered, amount_ordered, units_ordered, format_quoted, amount_quoted, units_quoted, price_quoted, wholesale, nitems_shipped, nitems_analytical, comment, salt_name, salt_ratio, original_order_id, price_quoted_usd, wholesale_usd, invoice_price, hazardous) TO stdout;\n(1 row)The Apache web server is shut off, and I upgraded to Postgres 9.2.4 since my first email.top(1) reports nothing interesting that I can see:\ntop - 18:50:18 up 340 days,  3:28,  4 users,  load average: 1.46, 1.40, 1.17Tasks: 282 total,   1 running, 281 sleeping,   0 stopped,   0 zombie\nCpu(s):  0.2%us,  0.1%sy,  0.0%ni, 86.9%id, 12.7%wa,  0.0%hi,  0.0%si,  0.0%stMem:  12322340k total, 11465028k used,   857312k free,    53028k buffers\nSwap: 19796984k total,    50224k used, 19746760k free, 10724856k cached  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND\n13311 emi       20   0 28476 8324 1344 S    1  0.1   3:48.90 pg_dump13359 emi       20   0 19368 1576 1076 R    0  0.0   0:00.09 top\n    1 root      20   0 23840 1372  688 S    0  0.0   0:03.13 init    2 root      20   0     0    0    0 S    0  0.0   0:00.04 kthreadd   \n    3 root      RT   0     0    0    0 S    0  0.0   0:00.27 migration/0    4 root      20   0     0    0    0 S    0  0.0   0:05.85 ksoftirqd/0\n... etc.\nInterestingly, it starts off going fairly fast according to \"vmstat 2\".  
This was started almost immediately after pg_dump started.  Notice that it goes well for a couple minutes, slows for a bit, speeds up, then drops to almost nothing.  It stays that way forever, just not doing anything.  See below.\nThanks,Craigprocs  -----------memory----------  ---swap-- -----io---- -system-- ----cpu----\n r  b   swpd    free   buff   cache   si   so    bi    bo   in   cs us sy id wa 1  0  50492 4129584  53672 7455892    0    0    46    37    0    0  0  0 99  0 1  0  50488 4091536  53672 7494072    0    0 15744     0  389  717 12  0 87  0\n 3  0  50480 4053868  53672 7531836    0    0 15680    36  408  759 17  0 83  0 1  0  50480 4016252  53680 7569636    0    0 15580     6  379  700 12  0 87  0 2  0  50480 3979140  53680 7606800    0    0 15360     0  372  682 17  0 82  0\n 1  1  50480 3943536  53696 7642740    0    0 14720 68846 2076 1343 11  1 81  7 1  0  50448 3906040  53700 7680044    0    0 15488     2  456  760 18  0 80  1 1  0  50424 3866036  53704 7720436    0    0 16818     0  389  713 13  1 86  0\n 1  0  50424 3813564  53712 7772656    0    0 21952    16  424  772 18  1 81  0 1  0  50424 3762460  53712 7823840    0    0 21376     0  414  761 13  0 87  0 1  0  50424 3710416  53720 7875968    0    0 22080     6  420  777 18  0 82  0\n 1  0  50424 3660616  53720 7925828    0    0 20672     0  412  777 11  0 88  0 1  0  50424 3611208  53720 7975484    0    0 20608     0  411  753 17  0 83  0 1  0  50424 3566916  53728 8019580    0    0 18176    14  406  743 14  0 86  0\n 1  0  50424 3514708  53728 8071780    0    0 21760     0  427  791 17  1 83  0 0  1  50424 3478924  53736 8107700    0    0 15360     6  323  592  9  0 87  3 0  1  50424 3478636  53736 8108040    0    0   960     0   95  164  2  0 81 17\n 2  0  50424 3456340  52604 8130768    0    0 10818    36  365  532  6  0 89  5 1  0  50424 3413248  52580 8173744    0    0 21504     6  612  814 19  1 81  0 2  0  50424 3376956  52580 8210000    0    0 18368     0  554  759 15  0 85  0\n 2  1  50424 3338208  52600 8249304    0    0 19456 21360  848  800 18  0 81  0 2  0  50424 3304936  52600 8283196    0    0 16896 31532 1238  759 13  0 84  3 2  0  50424 3265300  52600 8323020    0    0 19776     0  581  780 18  0 82  0\n 3  0  50424 3218968  52608 8369276    0    0 23104    18  633  843 16  1 83  0 1  0  50424 3177500  52608 8410292    0    0 20544     0  583  761 15  0 85  0 2  0  50424 3133868  52608 8453808    0    0 21760    22  609  807 16  1 83  0\n 1  0  50424 3094220  52608 8493780    0    0 19840     0  576  765 18  1 81  0 2  0  50424 3054492  52608 8533436    0    0 19712     0  564  748 14  0 85  0 1  0  50424 3015664  52624 8572184    0    0 19264    12  562  755 18  0 82  0\n 0  1  50380 2946524  52628 8641404    0    0 34734     0  856 1035 15  1 83  1 2  0  50380 2882988  52636 8704484    0    0 28784    10  757  853 15  0 84  1 1  0  50376 2816264  52636 8771276    0    0 29632     4  751  842 15  0 84  1\n 1  0  50376 2743160  52636 8843216    0    0 32832    36  817  910 16  0 83  0 1  0  50372 2655072  52644 8931240    0    0 40260     6  912  892 17  1 82  1 2  0  50356 2531112  52644 9055776    0    0 58560     0 1211  940 15  1 84  0\n 2  0  50356 2406352  52660 9180476    0    0 58600    30 1214  945 18  1 80  1 2  0  50356 2303256  52664 9284364    0    0 48512 55414 2022  883 15  1 80  4 1  0  50356 2174184  52664 9413440    0    0 60608     0 1240  941 17  1 82  0\n 1  0  50356 2091428  52672 9495784    0    0 37652    20  858  758 14  0 86  0 1  0  50356 2022020  52672 9565420    0    
0 31296     0  748  694 16  1 83  0 1  0  50356 1953052  52680 9634684    0    0 31168    48  759  709 14  0 86  0\n 1  1  50356 1903932  52700 9682092   10    0 21024     0  632  813 11  0 85  3 1  0  50356 1862844  52700 9723072    0    0 20138     0  702  981 15  1 78  6 1  0  50356 1809516  52716 9776556    0    0 22400    34  619  783 15  0 85  0\n 1  0  50348 1755744  52716 9830160    0    0 22208     0  605  774 14  0 86  0 2  0  50336 1702324  52724 9884184    0    0 22400     6  600  773 16  0 84  0 1  0  50336 1648008  52732 9938304    0    0 22592     6  623  796 14  1 85  0\n 2  0  50336 1592896  52732 9993144    0    0 23424    36  658  861 15  0 84  0 1  0  50336 1547552  52744 10038416    0    0 19264    66 1045  980 12  0 85  3 2  0  50336 1490988  52744 10095964    0    0 26116     0  676  732 15  1 80  3\n 1  0  50336 1434456  52760 10151276    0    0 26600    32  697  758 13  1 86  0 1  2  50336 1408144  52768 10177992    0    0 11270 52636 1296  819 13  0 82  5 1  0  50336 1369920  52768 10216120    0    0 15872  3710  702  681 15  0 84  1\n 1  0  50336 1331664  52776 10254604    0    0 16000    18  501  687 16  0 84  0 1  0  50328 1293004  52776 10293184    0    0 15936     0  504  670 13  0 87  0 1  0  50268 1254560  52784 10331812    0    0 15936    18  512  680 16  0 83  0\n 1  0  50268 1216276  52784 10370156    0    0 15872     0  514  688 14  0 86  0 1  0  50252 1177600  52784 10408532    0    0 16000     0  502  672 15  0 85  0 1  0  50244 1138936  52796 10447412    0    0 16072     8  512  680 14  0 86  0\n 0  1  50220 1109028  52796 10476884    0    0 12608    18  423  566 11  0 85  3 0  1  50220 1109912  52804 10476520    0    0   704    10  100  196  1  0 84 14 0  1  50220 1107952  52800 10478324    0    0  1728     0  127  260  2  0 88 10\n 1  0  50220 1088676  52800 10497236    0    0  8832    36  336  513  7  0 86  6 1  0  50220 1038608  52808 10547152    0    0 20856   146  610  763 15  1 84  0 1  0  50220 985768  52808 10600160    0    0 22336     0  613  774 13  0 86  0\n 1  0  50220 933764  52824 10652336    0    0 21888    28  601  767 16  0 84  0 1  0  50220 883664  52824 10702216    0    0 20800     0  589  752 14  1 86  0 0  1  50220 879304  52808 10706896    0    0  3584 44682  928  335  4  0 84 12\n 0  2  50220 877780  52816 10707928    0    0  1408    16  110  188  2  0 84 14 0  1  50220 877156  52816 10709232    0    0  1472     0  112  196  2  0 87 11 0  1  50220 876540  52804 10710044    0    0  1408    20  119  213  1  0 85 14\n 0  1  50220 876104  52812 10710008    0    0  1024     6  117  234  1  0 89 10 0  1  50220 875736  52812 10710072    0    0   896     0  102  197  1  0 84 15 0  1  50220 875248  52820 10710856    0    0  1280     8  128  229  2  0 85 13\n 2  1  50220 874628  52820 10711556    0    0  1216     0  110  198  2  0 88 10 1  1  50220 873992  52820 10712288    0    0  1280     0  103  189  3  0 83 14 0  1  50220 875564  52824 10710784    0    0   384     6  113  207  1  0 86 13\n 0  1  50220 875460  52824 10710448    0    0  1216    36  142  263  2  0 86 12 1  1  50220 873852  52820 10712376    0    0  1856     0  143  255  3  0 85 12 0  1  50220 873752  52828 10712144    0    0   832     6  112  210  2  0 86 12\n 1  1  50220 873336  52832 10712884    0    0  1280    20  112  207  2  0 86 12 0  1  50220 872588  52840 10713684    0    0  1280     6  119  214  2  0 87 11 0  1  50220 872276  52840 10713944    0    0  1152  3884  181  184  2  0 86 13\n 0  1  50220 871672  52836 10714584    0    0  1088     4   91  169  1  0 85 
13 0  1  50220 871504  52844 10714636    0    0  1088    14  116  200  2  0 86 13 1  1  50220 871040  52852 10715800    0    0  1280    34  118  217  2  0 87 12\n 0  1  50220 870308  52852 10716292    0    0  1088     0  115  204  1  0 87 12 0  1  50220 869972  52860 10716428    0    0  1216     6  103  190  2  0 85 14 0  1  50220 869492  52864 10716792    0    0  1024     2   98  179  2  0 85 13\n 0  1  50220 869372  52872 10716676    0    0   960     6  108  204  1  0 88 11 0  1  50220 869076  52872 10717008    0    0  1088     0  107  187  2  0 84 14 0  1  50220 868380  52864 10717668    0    0  1152     0  116  203  1  0 88 11\n 0  1  50220 867384  52872 10718000    0    0  1026    42  142  277  1  0 87 12 0  1  50220 866332  52872 10719644    0    0  1600     0  118  205  2  0 83 14 1  0  50220 866424  52880 10719784    0    0  1216     6  100  184  1  0 86 12\n 0  1  50220 865836  52880 10720324    0    0  1216    20  101  197  1  0 87 12 0  1  50220 865524  52880 10720592    0    0   960     0  113  207  1  0 85 14", "msg_date": "Sat, 14 Sep 2013 18:54:19 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely slow server?" }, { "msg_contents": "On Sat, Sep 14, 2013 at 6:54 PM, Craig James <[email protected]> wrote:\n\n> On Sat, Sep 14, 2013 at 11:36 AM, bricklen <[email protected]> wrote:\n>\n>> On Sat, Sep 14, 2013 at 11:28 AM, Craig James <[email protected]>wrote:\n>>\n>>> I'm trying to do a pg_dump of a database, and it more-or-less just sits\n>>> there doing nothing.\n>>>\n>>\n>> What is running in the db? Perhaps there is something blocking the\n>> pg_dump? What does the output of the following query look like?\n>>\n>> select * from pg_stat_activity where pid <> pg_backend_pid()\n>>\n>>\n> =# select * from pg_stat_activity where pid <> pg_backend_pid();\n> datid | datname | pid | usesysid | usename | application_name |\n> client_addr | client_hostname | client_port | backend_start\n> | xact_start | query_start |\n> state_change | waiting | state |\n> query\n>\n> --------+------------+-------+----------+----------+------------------+-------------+-----------------+-------------+------------------------------\n>\n> -+-------------------------------+------------------------------+-------------------------------+---------+--------+-------------------------------\n> -------------------------------------------------------------------------\n> 231308 | emolecules | 13312 | 10 | postgres | pg_dump\n> | | | -1 | 2013-09-14\n> 18:37:08.752938-07\n> | 2013-09-14 18:37:08.783782-07 | 2013-09-14 18:39:43.74618-07 |\n> 2013-09-14 18:39:43.746181-07 | f | active | COPY\n> orders.chmoogle_thesaurus\n> (thesaurus_id, version_id, normalized, identifier, typecode) TO stdout;\n>\n\n\nI don't have any solutions at the moment, but three things come to mind:\n\n1). Try without \"--blobs\",\n2). Does \"strace -p <pid of pg_dump process>\" show anything unusual? Futex?\nLots of semops?\n3). Does \"pg_dump -f your_file.out -U postgres -Fc emolecules\" act any\ndifferent than redirecting STDOUT to a file?\n\nOn Sat, Sep 14, 2013 at 6:54 PM, Craig James <[email protected]> wrote:\nOn Sat, Sep 14, 2013 at 11:36 AM, bricklen <[email protected]> wrote:\nOn Sat, Sep 14, 2013 at 11:28 AM, Craig James <[email protected]> wrote:\nI'm trying to do a pg_dump of a database, and it more-or-less just sits there doing nothing. \nWhat is running in the db? Perhaps there is something blocking the pg_dump? 
What does the output of the following query look like?\nselect * from pg_stat_activity where pid <> pg_backend_pid()\n=# select * from pg_stat_activity where pid <> pg_backend_pid(); datid  |  datname   |  pid  | usesysid | usename  | application_name | client_addr | client_hostname | client_port |         backend_start        \n\n |          xact_start           |         query_start          |         state_change          | waiting | state  |                                                 query                                                  \n\n--------+------------+-------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+------------------------------+-------------------------------+---------+--------+-------------------------------\n\n------------------------------------------------------------------------- 231308 | emolecules | 13312 |       10 | postgres | pg_dump          |             |                 |          -1 | 2013-09-14 18:37:08.752938-07\n\n | 2013-09-14 18:37:08.783782-07 | 2013-09-14 18:39:43.74618-07 | 2013-09-14 18:39:43.746181-07 | f       | active | COPY orders.chmoogle_thesaurus (thesaurus_id, version_id, normalized, identifier, typecode) TO stdout;\nI don't have any solutions at the moment, but three things come to mind:1). Try without \"--blobs\",\n2). Does \"strace -p <pid of pg_dump process>\" show anything unusual? Futex? Lots of semops?3). Does \"pg_dump -f your_file.out -U postgres -Fc emolecules\" act any different than redirecting STDOUT to a file?", "msg_date": "Sat, 14 Sep 2013 19:06:03 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow server?" }, { "msg_contents": "Can it be hardware problem with io? Try finding out which file the stuck\ntable is and do a simple fs copy. Or simply do a copy of the whole pg data\ndirectory.\n15 вер. 2013 04:54, \"Craig James\" <[email protected]> напис.\n\n> On Sat, Sep 14, 2013 at 11:36 AM, bricklen <[email protected]> wrote:\n>\n>> On Sat, Sep 14, 2013 at 11:28 AM, Craig James <[email protected]>wrote:\n>>\n>>> I'm trying to do a pg_dump of a database, and it more-or-less just sits\n>>> there doing nothing.\n>>>\n>>\n>> What is running in the db? Perhaps there is something blocking the\n>> pg_dump? 
What does the output of the following query look like?\n>>\n>> select * from pg_stat_activity where pid <> pg_backend_pid()\n>>\n>>\n> =# select * from pg_stat_activity where pid <> pg_backend_pid();\n> datid | datname | pid | usesysid | usename | application_name |\n> client_addr | client_hostname | client_port | backend_start\n> | xact_start | query_start |\n> state_change | waiting | state |\n> query\n>\n> --------+------------+-------+----------+----------+------------------+-------------+-----------------+-------------+------------------------------\n>\n> -+-------------------------------+------------------------------+-------------------------------+---------+--------+-------------------------------\n> -------------------------------------------------------------------------\n> 231308 | emolecules | 13312 | 10 | postgres | pg_dump\n> | | | -1 | 2013-09-14\n> 18:37:08.752938-07\n> | 2013-09-14 18:37:08.783782-07 | 2013-09-14 18:39:43.74618-07 |\n> 2013-09-14 18:39:43.746181-07 | f | active | COPY\n> orders.chmoogle_thesaurus\n> (thesaurus_id, version_id, normalized, identifier, typecode) TO stdout;\n> (1 row)\n>\n> And a bit later:\n>\n> 231308 | emolecules | 13312 | 10 | postgres | pg_dump\n> | | | -1 | 2013-09-14\n> 18:37:08.752938-07\n> | 2013-09-14 18:37:08.783782-07 | 2013-09-14 18:47:38.287109-07 |\n> 2013-09-14 18:47:38.287111-07 | f | active | COPY\n> orders.customer_order_items (customer_order_item_id, customer_order_id,\n> orig_smiles, orig_sdf, orig_datatype, orig_catalog_num, orig_rownum,\n> cansmiles, version_id, version_smiles, parent_id, match_type, catalogue_id,\n> supplier_id, sample_id, catalog_num, price_code, reason, discount,\n> format_ordered, amount_ordered, units_ordered, format_quoted,\n> amount_quoted, units_quoted, price_quoted, wholesale, nitems_shipped,\n> nitems_analytical, comment, salt_name, salt_ratio, original_order_id,\n> price_quoted_usd, wholesale_usd, invoice_price, hazardous) TO stdout;\n> (1 row)\n>\n> The Apache web server is shut off, and I upgraded to Postgres 9.2.4 since\n> my first email.\n>\n> top(1) reports nothing interesting that I can see:\n>\n> top - 18:50:18 up 340 days, 3:28, 4 users, load average: 1.46, 1.40,\n> 1.17\n> Tasks: 282 total, 1 running, 281 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 0.2%us, 0.1%sy, 0.0%ni, 86.9%id, 12.7%wa, 0.0%hi, 0.0%si,\n> 0.0%st\n> Mem: 12322340k total, 11465028k used, 857312k free, 53028k buffers\n> Swap: 19796984k total, 50224k used, 19746760k free, 10724856k cached\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 13311 emi 20 0 28476 8324 1344 S 1 0.1 3:48.90 pg_dump\n> 13359 emi 20 0 19368 1576 1076 R 0 0.0 0:00.09 top\n> 1 root 20 0 23840 1372 688 S 0 0.0 0:03.13 init\n> 2 root 20 0 0 0 0 S 0 0.0 0:00.04 kthreadd\n> 3 root RT 0 0 0 0 S 0 0.0 0:00.27 migration/0\n> 4 root 20 0 0 0 0 S 0 0.0 0:05.85 ksoftirqd/0\n> ... etc.\n>\n>\n> Interestingly, it starts off going fairly fast according to \"vmstat 2\".\n> This was started almost immediately after pg_dump started. Notice that it\n> goes well for a couple minutes, slows for a bit, speeds up, then drops to\n> almost nothing. It stays that way forever, just not doing anything. 
See\n> below.\n>\n> Thanks,\n> Craig\n>\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy\n> id wa\n> 1 0 50492 4129584 53672 7455892 0 0 46 37 0 0 0 0\n> 99 0\n> 1 0 50488 4091536 53672 7494072 0 0 15744 0 389 717 12 0\n> 87 0\n> 3 0 50480 4053868 53672 7531836 0 0 15680 36 408 759 17 0\n> 83 0\n> 1 0 50480 4016252 53680 7569636 0 0 15580 6 379 700 12 0\n> 87 0\n> 2 0 50480 3979140 53680 7606800 0 0 15360 0 372 682 17 0\n> 82 0\n> 1 1 50480 3943536 53696 7642740 0 0 14720 68846 2076 1343 11 1\n> 81 7\n> 1 0 50448 3906040 53700 7680044 0 0 15488 2 456 760 18 0\n> 80 1\n> 1 0 50424 3866036 53704 7720436 0 0 16818 0 389 713 13 1\n> 86 0\n> 1 0 50424 3813564 53712 7772656 0 0 21952 16 424 772 18 1\n> 81 0\n> 1 0 50424 3762460 53712 7823840 0 0 21376 0 414 761 13 0\n> 87 0\n> 1 0 50424 3710416 53720 7875968 0 0 22080 6 420 777 18 0\n> 82 0\n> 1 0 50424 3660616 53720 7925828 0 0 20672 0 412 777 11 0\n> 88 0\n> 1 0 50424 3611208 53720 7975484 0 0 20608 0 411 753 17 0\n> 83 0\n> 1 0 50424 3566916 53728 8019580 0 0 18176 14 406 743 14 0\n> 86 0\n> 1 0 50424 3514708 53728 8071780 0 0 21760 0 427 791 17 1\n> 83 0\n> 0 1 50424 3478924 53736 8107700 0 0 15360 6 323 592 9 0\n> 87 3\n> 0 1 50424 3478636 53736 8108040 0 0 960 0 95 164 2 0\n> 81 17\n> 2 0 50424 3456340 52604 8130768 0 0 10818 36 365 532 6 0\n> 89 5\n> 1 0 50424 3413248 52580 8173744 0 0 21504 6 612 814 19 1\n> 81 0\n> 2 0 50424 3376956 52580 8210000 0 0 18368 0 554 759 15 0\n> 85 0\n> 2 1 50424 3338208 52600 8249304 0 0 19456 21360 848 800 18 0\n> 81 0\n> 2 0 50424 3304936 52600 8283196 0 0 16896 31532 1238 759 13 0\n> 84 3\n> 2 0 50424 3265300 52600 8323020 0 0 19776 0 581 780 18 0\n> 82 0\n> 3 0 50424 3218968 52608 8369276 0 0 23104 18 633 843 16 1\n> 83 0\n> 1 0 50424 3177500 52608 8410292 0 0 20544 0 583 761 15 0\n> 85 0\n> 2 0 50424 3133868 52608 8453808 0 0 21760 22 609 807 16 1\n> 83 0\n> 1 0 50424 3094220 52608 8493780 0 0 19840 0 576 765 18 1\n> 81 0\n> 2 0 50424 3054492 52608 8533436 0 0 19712 0 564 748 14 0\n> 85 0\n> 1 0 50424 3015664 52624 8572184 0 0 19264 12 562 755 18 0\n> 82 0\n> 0 1 50380 2946524 52628 8641404 0 0 34734 0 856 1035 15 1\n> 83 1\n> 2 0 50380 2882988 52636 8704484 0 0 28784 10 757 853 15 0\n> 84 1\n> 1 0 50376 2816264 52636 8771276 0 0 29632 4 751 842 15 0\n> 84 1\n> 1 0 50376 2743160 52636 8843216 0 0 32832 36 817 910 16 0\n> 83 0\n> 1 0 50372 2655072 52644 8931240 0 0 40260 6 912 892 17 1\n> 82 1\n> 2 0 50356 2531112 52644 9055776 0 0 58560 0 1211 940 15 1\n> 84 0\n> 2 0 50356 2406352 52660 9180476 0 0 58600 30 1214 945 18 1\n> 80 1\n> 2 0 50356 2303256 52664 9284364 0 0 48512 55414 2022 883 15 1\n> 80 4\n> 1 0 50356 2174184 52664 9413440 0 0 60608 0 1240 941 17 1\n> 82 0\n> 1 0 50356 2091428 52672 9495784 0 0 37652 20 858 758 14 0\n> 86 0\n> 1 0 50356 2022020 52672 9565420 0 0 31296 0 748 694 16 1\n> 83 0\n> 1 0 50356 1953052 52680 9634684 0 0 31168 48 759 709 14 0\n> 86 0\n> 1 1 50356 1903932 52700 9682092 10 0 21024 0 632 813 11 0\n> 85 3\n> 1 0 50356 1862844 52700 9723072 0 0 20138 0 702 981 15 1\n> 78 6\n> 1 0 50356 1809516 52716 9776556 0 0 22400 34 619 783 15 0\n> 85 0\n> 1 0 50348 1755744 52716 9830160 0 0 22208 0 605 774 14 0\n> 86 0\n> 2 0 50336 1702324 52724 9884184 0 0 22400 6 600 773 16 0\n> 84 0\n> 1 0 50336 1648008 52732 9938304 0 0 22592 6 623 796 14 1\n> 85 0\n> 2 0 50336 1592896 52732 9993144 0 0 23424 36 658 861 15 0\n> 84 0\n> 1 0 50336 1547552 52744 10038416 0 0 19264 66 1045 980 12 0\n> 85 3\n> 2 0 
50336 1490988 52744 10095964 0 0 26116 0 676 732 15 1\n> 80 3\n> 1 0 50336 1434456 52760 10151276 0 0 26600 32 697 758 13 1\n> 86 0\n> 1 2 50336 1408144 52768 10177992 0 0 11270 52636 1296 819 13 0\n> 82 5\n> 1 0 50336 1369920 52768 10216120 0 0 15872 3710 702 681 15 0\n> 84 1\n> 1 0 50336 1331664 52776 10254604 0 0 16000 18 501 687 16 0\n> 84 0\n> 1 0 50328 1293004 52776 10293184 0 0 15936 0 504 670 13 0\n> 87 0\n> 1 0 50268 1254560 52784 10331812 0 0 15936 18 512 680 16 0\n> 83 0\n> 1 0 50268 1216276 52784 10370156 0 0 15872 0 514 688 14 0\n> 86 0\n> 1 0 50252 1177600 52784 10408532 0 0 16000 0 502 672 15 0\n> 85 0\n> 1 0 50244 1138936 52796 10447412 0 0 16072 8 512 680 14 0\n> 86 0\n> 0 1 50220 1109028 52796 10476884 0 0 12608 18 423 566 11 0\n> 85 3\n> 0 1 50220 1109912 52804 10476520 0 0 704 10 100 196 1 0\n> 84 14\n> 0 1 50220 1107952 52800 10478324 0 0 1728 0 127 260 2 0\n> 88 10\n> 1 0 50220 1088676 52800 10497236 0 0 8832 36 336 513 7 0\n> 86 6\n> 1 0 50220 1038608 52808 10547152 0 0 20856 146 610 763 15 1\n> 84 0\n> 1 0 50220 985768 52808 10600160 0 0 22336 0 613 774 13 0\n> 86 0\n> 1 0 50220 933764 52824 10652336 0 0 21888 28 601 767 16 0\n> 84 0\n> 1 0 50220 883664 52824 10702216 0 0 20800 0 589 752 14 1\n> 86 0\n> 0 1 50220 879304 52808 10706896 0 0 3584 44682 928 335 4 0\n> 84 12\n> 0 2 50220 877780 52816 10707928 0 0 1408 16 110 188 2 0\n> 84 14\n> 0 1 50220 877156 52816 10709232 0 0 1472 0 112 196 2 0\n> 87 11\n> 0 1 50220 876540 52804 10710044 0 0 1408 20 119 213 1 0\n> 85 14\n> 0 1 50220 876104 52812 10710008 0 0 1024 6 117 234 1 0\n> 89 10\n> 0 1 50220 875736 52812 10710072 0 0 896 0 102 197 1 0\n> 84 15\n> 0 1 50220 875248 52820 10710856 0 0 1280 8 128 229 2 0\n> 85 13\n> 2 1 50220 874628 52820 10711556 0 0 1216 0 110 198 2 0\n> 88 10\n> 1 1 50220 873992 52820 10712288 0 0 1280 0 103 189 3 0\n> 83 14\n> 0 1 50220 875564 52824 10710784 0 0 384 6 113 207 1 0\n> 86 13\n> 0 1 50220 875460 52824 10710448 0 0 1216 36 142 263 2 0\n> 86 12\n> 1 1 50220 873852 52820 10712376 0 0 1856 0 143 255 3 0\n> 85 12\n> 0 1 50220 873752 52828 10712144 0 0 832 6 112 210 2 0\n> 86 12\n> 1 1 50220 873336 52832 10712884 0 0 1280 20 112 207 2 0\n> 86 12\n> 0 1 50220 872588 52840 10713684 0 0 1280 6 119 214 2 0\n> 87 11\n> 0 1 50220 872276 52840 10713944 0 0 1152 3884 181 184 2 0\n> 86 13\n> 0 1 50220 871672 52836 10714584 0 0 1088 4 91 169 1 0\n> 85 13\n> 0 1 50220 871504 52844 10714636 0 0 1088 14 116 200 2 0\n> 86 13\n> 1 1 50220 871040 52852 10715800 0 0 1280 34 118 217 2 0\n> 87 12\n> 0 1 50220 870308 52852 10716292 0 0 1088 0 115 204 1 0\n> 87 12\n> 0 1 50220 869972 52860 10716428 0 0 1216 6 103 190 2 0\n> 85 14\n> 0 1 50220 869492 52864 10716792 0 0 1024 2 98 179 2 0\n> 85 13\n> 0 1 50220 869372 52872 10716676 0 0 960 6 108 204 1 0\n> 88 11\n> 0 1 50220 869076 52872 10717008 0 0 1088 0 107 187 2 0\n> 84 14\n> 0 1 50220 868380 52864 10717668 0 0 1152 0 116 203 1 0\n> 88 11\n> 0 1 50220 867384 52872 10718000 0 0 1026 42 142 277 1 0\n> 87 12\n> 0 1 50220 866332 52872 10719644 0 0 1600 0 118 205 2 0\n> 83 14\n> 1 0 50220 866424 52880 10719784 0 0 1216 6 100 184 1 0\n> 86 12\n> 0 1 50220 865836 52880 10720324 0 0 1216 20 101 197 1 0\n> 87 12\n> 0 1 50220 865524 52880 10720592 0 0 960 0 113 207 1 0\n> 85 14\n>\n>\n>\n>\n>\n>\n\nCan it be hardware problem with io? Try finding out which file the stuck table is and do a simple fs copy. Or simply do a copy of the whole pg data directory.\n15 вер. 
2013 04:54, \"Craig James\" <[email protected]> напис.", "msg_date": "Sun, 15 Sep 2013 10:50:40 +0300", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow server?" }, { "msg_contents": "On Sat, Sep 14, 2013 at 7:06 PM, bricklen <[email protected]> wrote:\n\n>\n>\n> I don't have any solutions at the moment, but three things come to mind:\n>\n> 1). Try without \"--blobs\",\n> 2). Does \"strace -p <pid of pg_dump process>\" show anything unusual? Futex?\n> Lots of semops?\n>\n\nHe probably needs to find the pid of the backend to which pg_dump is\nconnected (such as from pg_stat_activity), and strace that rather than\nstracing pg_dump itself.\n\nCheers,\n\nJeff", "msg_date": "Sun, 15 Sep 2013 13:55:45 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow server?" } ]
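A concrete form of Jeff's suggestion, assuming pg_dump is the only session reporting that application_name (the column names below are the ones visible in the 9.2 pg_stat_activity output quoted in this thread):

    -- Locate the server backend doing the COPY for pg_dump, so that backend
    -- (rather than the pg_dump client process) can be attached to with strace:
    SELECT pid, state, waiting, query_start, query
    FROM pg_stat_activity
    WHERE application_name = 'pg_dump';

As Jeff points out, it is that backend pid, not the pg_dump client, that is worth tracing when the dump appears to hang.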
[ { "msg_contents": "I'm in the process of taking the next leap in performance optimization of our database, and I need some good advice along the way. I posted the full question, with images, on Stack Exchange; it would be great if someone were interested in commenting on it or answering it there.\n\nRegards Niels Kristian\n\nhttp://dba.stackexchange.com/questions/49984/how-to-optimization-database-for-heavy-i-o-from-updates-software-and-hardware\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 16 Sep 2013 17:53:27 +0200", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "How to optimization database for heavy I/O from updates (software and\n hardware)" } ]
[ { "msg_contents": "> What's more doing similar insert (even much larger) to the 'offers' table\ndoes not affect the benchmark results in any significant way...\n\nJust to clarify: an insert into the 'offers' table does not cause the\nslowdown. Only an insert into 'categories' causes the problem.", "msg_date": "Wed, 18 Sep 2013 00:16:49 +0200", "msg_from": "=?ISO-8859-2?Q?Bart=B3omiej_Roma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner performance extremely affected by an hanging transaction\n (20-30 times)?" } ]
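The original post in this thread (reproduced further down in this archive) notes that using prepared statements avoids the problem completely. A minimal sketch of that workaround against the offers/categories test schema used in the benchmark might be:

    -- Per the thread, issuing the lookup through PREPARE/EXECUTE sidesteps the
    -- per-query planning cost that the open transaction on 'categories' inflates:
    PREPARE get_offer (bigint) AS
        SELECT o.id, o.name, c.id, c.name
        FROM offers o
        LEFT JOIN categories c ON c.id = o.category_id
        WHERE o.id = $1;

    EXECUTE get_offer(42);

The statement name get_offer and the literal 42 are only illustrative.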
[ { "msg_contents": "We are running PostgreSQL 9.1.6 with autovacuum = on, and I am reporting on dead tuples using the pgstattuple extension. Each time I run pgstattuple, our dead tuple counts decrease. My colleague is under the impression that dead tuples are only cleaned up via VACUUM FULL, while I suggested that the autovacuum process was cleaning up these dead tuples. Is this true?\n\nThanks\n", "msg_date": "Wed, 18 Sep 2013 08:42:42 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum and dead tuples" }, { "msg_contents": "On 09/18/2013 10:42 AM, [email protected] wrote:\n\n> My colleague is under the impression that dead tuples are only cleaned\n> up via VACUUM FULL, while I suggested that the autovacuum process was\n> cleaning up these dead tuples. Is this true?\n\nYou are correct. Only VACUUM FULL (or CLUSTER) physically removes dead \ntuples from the table, but a regular VACUUM enters them into the free \nspace map for reuse, so they wouldn't show up in the dead_tuple_count \ncolumn in pgstattuple. It's possible your colleague was confused by the \nphysical removal versus reassignment.\n\nKeep in mind that the dead tuples are still in the table, but reusable. \nThe free_space and free_percent columns are a better description of table \nbloat from data turnover cleaned up by autovacuum.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 18 Sep 2013 11:27:35 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum and dead tuples" } ]
[ { "msg_contents": "Hi all\n\nWe're experiencing a very strange performance issue. Our setup is a bit\nmore complicated, but we've managed to isolate and replicate the core\nproblem. Here's what we observe:\n\nWe took a strong machine (128 GB RAM, 8-core CPU, SSD drives...) and\ninstalled a fresh copy of PostgreSQL 9.2 (Ubuntu Server 12.04 LTS, default\nconfiguration).\n\nThen, we created a test database with the following schema:\n\nCREATE TABLE offers\n(\n id bigserial NOT NULL,\n name character varying NOT NULL,\n category_id bigint NOT NULL,\n CONSTRAINT offers_pkey PRIMARY KEY (id)\n);\n\nCREATE TABLE categories\n(\n id bigserial NOT NULL,\n name character varying NOT NULL,\n CONSTRAINT categories_pkey PRIMARY KEY (id)\n);\n\nand populated it with in the following way:\n\ninsert into categories (name) select 'category_' || x from\ngenerate_series(1,1000) as x;\ninsert into offers (name, category_id) select 'offer_' || x, floor(random()\n* 1000) + 1 from generate_series(1,1000*1000) as x;\n\nFinally, we created a python script to make simple queries in a loop:\n\nwhile True:\n id = random.randrange(1, 1000 * 1000)\n db.execute('select offers.id, offers.name, categories.id,\ncategories.name from offers left join categories on categories.id =\noffers.category_id where offers.id = %s', (id,))\n print db.fetchall()\n\nWe start 20 instances simultaneously and measure performance:\n\nparallel -j 20 ./test.py -- $(seq 1 20) | pv -i1 -l > /dev/null\n\nNormally we observe about 30k QPS what's a satisfying result (without any\ntuning at all).\n\nThe problem occurs when we open a second console, start psql and type:\n\npgtest=> begin; insert into categories (name) select 'category_' || x from\ngenerate_series(1,1000) as x;\n\nWe start a transaction and insert 1k records to the 'categories' table.\nAfter that performance of the benchmark described above immediately drops\nto about 1-2k QPS. That's 20-30 times! After closing the transaction\n(committing or aborting - doesn't matter) it immediately jumps back to 30k\nQPS.\n\nRestarting the running script and other simple tricks do not change\nanything. The hanging, open transaction is causing a huge slowdown. What's\nmore when doing similar insert (even much larger) to the 'offers' table we\ndo not observe this effect.\n\nWe analyzed the problem a bit deeper and find out that the actual query\nexecution times are not affected that much. They are constantly close to\n0.5 ms. This can be observed in a server log (after enabling appropriate\noption) and this can be found in 'explain analyze...' result. Also the\nquery plan returned do not change and looks optimal (pkey scan for 'offers'\n+ pkey scan for 'categories').\n\nAfter a few random thought we've finally turned the 'log_planner_stats'\noption and found out that the planner executions times are highly affected\nby the hanging transaction. Here's the typical output in the initial\nsituation:\n\n2013-09-17 21:54:59 UTC LOG: PLANNER STATISTICS\n2013-09-17 21:54:59 UTC DETAIL: ! system usage stats:\n ! 0.000137 elapsed 0.000000 user 0.000000 system sec\n ! [2.169670 user 0.383941 sys total]\n ! 0/0 [0/11520] filesystem blocks in/out\n ! 0/0 [0/7408] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 0/0 [1362/7648] voluntary/involuntary context switches\n\nAnd here's a typical output with a hanging transaction:\n\n2013-09-17 21:56:12 UTC LOG: PLANNER STATISTICS\n2013-09-17 21:56:12 UTC DETAIL: ! system usage stats:\n ! 
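The test described above can be reproduced along these lines (the table and column names are illustrative):

    CREATE TABLE range_test (r int4range);
    INSERT INTO range_test
        SELECT int4range(1, 10) FROM generate_series(1, 1000);
    ANALYZE range_test;

    SELECT n_distinct
    FROM pg_stats
    WHERE tablename = 'range_test' AND attname = 'r';
    -- Per the report above this still shows -1 (treated as unique), because the
    -- range-type typanalyze routine only collects histograms of the lower and
    -- upper bounds plus the fraction of empty ranges.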
0.027251 elapsed 0.008999 user 0.001000 system sec\n ! [32.722025 user 3.550460 sys total]\n ! 0/0 [0/115128] filesystem blocks in/out\n ! 0/0 [0/7482] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 1/6 [12817/80202] voluntary/involuntary context switches\n\nAs you can see it takes over 100 times more time when the extra transaction\nis open!\n\nAny ideas why's that? It extremely affects total query time.\n\nI know that using prepared statements to solves the problem completely, so\nwe can deal with it, but we've already spend a lot of time to go that far\nand I'm personally a bit curious what's the fundamental reason for such a\nbehavior.\n\nI'll be very thankful for any explanation what's going on here!\n\nThanks,\nBR\n\nHi allWe're experiencing a very strange performance issue. Our setup is a bit more complicated, but we've managed to isolate and replicate the core problem. Here's what we observe:\nWe took a strong machine (128 GB RAM, 8-core CPU, SSD drives...) and installed a fresh copy of PostgreSQL 9.2 (Ubuntu Server 12.04 LTS, default configuration).\nThen, we created a test database with the following schema:\nCREATE TABLE offers(  id bigserial NOT NULL,  name character varying NOT NULL,  category_id bigint NOT NULL,\n  CONSTRAINT offers_pkey PRIMARY KEY (id));CREATE TABLE categories\n(  id bigserial NOT NULL,  name character varying NOT NULL,  CONSTRAINT categories_pkey PRIMARY KEY (id));\nand populated it with in the following way:\ninsert into categories (name) select 'category_' || x from generate_series(1,1000) as x;insert into offers (name, category_id) select 'offer_' || x, floor(random() * 1000) + 1 from generate_series(1,1000*1000) as x;\nFinally, we created a python script to make simple queries in a loop:\nwhile True:    id = random.randrange(1, 1000 * 1000)    db.execute('select offers.id, offers.name, categories.id, categories.name from offers left join categories on categories.id = offers.category_id where offers.id = %s', (id,))\n    print db.fetchall()We start 20 instances simultaneously and measure performance:\nparallel -j 20 ./test.py -- $(seq 1 20) | pv -i1 -l > /dev/null\nNormally we observe about 30k QPS what's a satisfying result (without any tuning at all).\nThe problem occurs when we open a second console, start psql and type:\n\npgtest=> begin; insert into categories (name) select 'category_' || x from generate_series(1,1000) as x;\n\nWe start a transaction and insert 1k records to the 'categories' table. After that performance of the benchmark described above immediately drops to about 1-2k QPS. That's 20-30 times! After closing the transaction (committing or aborting - doesn't matter) it immediately jumps back to 30k QPS.\nRestarting the running script and other simple tricks do not change anything. The hanging, open transaction is causing a huge slowdown. What's more when doing similar insert (even much larger) to the 'offers' table we do not observe this effect.\nWe analyzed the problem a bit deeper and find out that the actual query execution times are not affected that much. They are constantly close to 0.5 ms. This can be observed in a server log (after enabling appropriate option) and this can be found in 'explain analyze...' result. 
Also the query plan returned do not change and looks optimal (pkey scan for 'offers' + pkey scan for 'categories').\nAfter a few random thought we've finally turned the 'log_planner_stats' option and found out that the planner executions times are highly affected by the hanging transaction. Here's the typical output in the initial situation:\n2013-09-17 21:54:59 UTC LOG:  PLANNER STATISTICS2013-09-17 21:54:59 UTC DETAIL:  ! system usage stats:\n        !       0.000137 elapsed 0.000000 user 0.000000 system sec        !       [2.169670 user 0.383941 sys total]        !       0/0 [0/11520] filesystem blocks in/out        !       0/0 [0/7408] page faults/reclaims, 0 [0] swaps\n        !       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent        !       0/0 [1362/7648] voluntary/involuntary context switches\nAnd here's a typical output with a hanging transaction:\n2013-09-17 21:56:12 UTC LOG:  PLANNER STATISTICS2013-09-17 21:56:12 UTC DETAIL:  ! system usage stats:        !       0.027251 elapsed 0.008999 user 0.001000 system sec        !       [32.722025 user 3.550460 sys total]\n        !       0/0 [0/115128] filesystem blocks in/out        !       0/0 [0/7482] page faults/reclaims, 0 [0] swaps        !       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent        !       1/6 [12817/80202] voluntary/involuntary context switches\nAs you can see it takes over 100 times more time when the extra transaction is open!\n\nAny ideas why's that? It extremely affects total query time. \n\nI know that using prepared statements to solves the problem completely, so we can deal with it, but we've already spend a lot of time to go that far and I'm personally a bit curious what's the fundamental reason for such a behavior.\nI'll be very thankful for any explanation what's going on here!\nThanks,BR", "msg_date": "Fri, 20 Sep 2013 02:49:18 +0200", "msg_from": "=?ISO-8859-2?Q?Bart=B3omiej_Roma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Planner performance extremely affected by an hanging transaction\n (20-30 times)?" }, { "msg_contents": "Hi Bart,\n\nYou are doing heavy random reads in addition to a 1000k row insert\nwithin a single transaction.\n\nAlso it is not clear whether your query within the python loop is itself\nwithin a transaction. (i.e. pyscopg2.connection.autocommit to True,\ndisables transactional queries).\n\nDepending on your pg adapter, it may open a transaction by default even\nthough it may not be required.\n\nSo please clarify whether this is the case.\n\nRegards,\n\nJulian.\n\nOn 20/09/13 10:49, Bartłomiej Romański wrote:\n> Hi all\n>\n> We're experiencing a very strange performance issue. Our setup is a\n> bit more complicated, but we've managed to isolate and replicate the\n> core problem. Here's what we observe:\n>\n> We took a strong machine (128 GB RAM, 8-core CPU, SSD drives...) 
and\n> installed a fresh copy of PostgreSQL 9.2 (Ubuntu Server 12.04 LTS,\n> default configuration).\n>\n> Then, we created a test database with the following schema:\n>\n> CREATE TABLE offers\n> (\n> id bigserial NOT NULL,\n> name character varying NOT NULL,\n> category_id bigint NOT NULL,\n> CONSTRAINT offers_pkey PRIMARY KEY (id)\n> );\n>\n> CREATE TABLE categories\n> (\n> id bigserial NOT NULL,\n> name character varying NOT NULL,\n> CONSTRAINT categories_pkey PRIMARY KEY (id)\n> );\n>\n> and populated it with in the following way:\n>\n> insert into categories (name) select 'category_' || x from\n> generate_series(1,1000) as x;\n> insert into offers (name, category_id) select 'offer_' || x,\n> floor(random() * 1000) + 1 from generate_series(1,1000*1000) as x;\n>\n> Finally, we created a python script to make simple queries in a loop:\n>\n> while True:\n> id = random.randrange(1, 1000 * 1000)\n> db.execute('select offers.id <http://offers.id/>, offers.name\n> <http://offers.name/>, categories.id <http://categories.id/>,\n> categories.name <http://categories.name/> from offers left join\n> categories on categories.id <http://categories.id/> =\n> offers.category_id where offers.id <http://offers.id/> = %s', (id,))\n> print db.fetchall()\n>\n> We start 20 instances simultaneously and measure performance:\n>\n> parallel -j 20 ./test.py -- $(seq 1 20) | pv -i1 -l > /dev/null\n>\n> Normally we observe about 30k QPS what's a satisfying result (without\n> any tuning at all).\n>\n> The problem occurs when we open a second console, start psql and type:\n>\n> pgtest=> begin; insert into categories (name) select 'category_' || x\n> from generate_series(1,1000) as x;\n>\n> We start a transaction and insert 1k records to the 'categories'\n> table. After that performance of the benchmark described above\n> immediately drops to about 1-2k QPS. That's 20-30 times! After closing\n> the transaction (committing or aborting - doesn't matter) it\n> immediately jumps back to 30k QPS.\n>\n> Restarting the running script and other simple tricks do not change\n> anything. The hanging, open transaction is causing a huge slowdown.\n> What's more when doing similar insert (even much larger) to the\n> 'offers' table we do not observe this effect.\n>\n> We analyzed the problem a bit deeper and find out that the actual\n> query execution times are not affected that much. They are constantly\n> close to 0.5 ms. This can be observed in a server log (after enabling\n> appropriate option) and this can be found in 'explain analyze...'\n> result. Also the query plan returned do not change and looks optimal\n> (pkey scan for 'offers' + pkey scan for 'categories').\n>\n> After a few random thought we've finally turned the\n> 'log_planner_stats' option and found out that the planner executions\n> times are highly affected by the hanging transaction. Here's the\n> typical output in the initial situation:\n>\n> 2013-09-17 21:54:59 UTC LOG: PLANNER STATISTICS\n> 2013-09-17 21:54:59 UTC DETAIL: ! system usage stats:\n> ! 0.000137 elapsed 0.000000 user 0.000000 system sec\n> ! [2.169670 user 0.383941 sys total]\n> ! 0/0 [0/11520] filesystem blocks in/out\n> ! 0/0 [0/7408] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/0 [1362/7648] voluntary/involuntary context switches\n>\n> And here's a typical output with a hanging transaction:\n>\n> 2013-09-17 21:56:12 UTC LOG: PLANNER STATISTICS\n> 2013-09-17 21:56:12 UTC DETAIL: ! system usage stats:\n> ! 
0.027251 elapsed 0.008999 user 0.001000 system sec\n> ! [32.722025 user 3.550460 sys total]\n> ! 0/0 [0/115128] filesystem blocks in/out\n> ! 0/0 [0/7482] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 1/6 [12817/80202] voluntary/involuntary context switches\n>\n> As you can see it takes over 100 times more time when the extra\n> transaction is open!\n>\n> Any ideas why's that? It extremely affects total query time.\n>\n> I know that using prepared statements to solves the problem\n> completely, so we can deal with it, but we've already spend a lot of\n> time to go that far and I'm personally a bit curious what's the\n> fundamental reason for such a behavior.\n>\n> I'll be very thankful for any explanation what's going on here!\n>\n> Thanks,\n> BR\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 20 Sep 2013 11:42:45 +1000", "msg_from": "Julian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "Hi Julian,\n\nHere's my complete python script:\n\nimport psycopg2\nimport random\nimport math\nimport time\n\nconnection = psycopg2.connect('host=localhost dbname=pgtest user=pgtest\npassword=pgtest')\ncursor = connection.cursor()\n\nwhile True:\n id = random.randrange(1, 1000 * 1000)\n cursor.execute('select offers.id, offers.name, categories.id,\ncategories.name from offers left join categories on categories.id =\noffers.category_id where offers.id = %s', (id,))\n print cursor.fetchall()\n\nSo I assume that each of 20 instances opens and uses a single transaction.\nI've just tested the options with connection.autocommit = True at the\nbegging, but it does not change anything. Also in production (where we\nfirst noticed the problem) we use a new transaction for every select.\n\nI start 20 instances of this python script (I use pv to measure\nperformance):\n\nparallel -j 20 ./test.py -- $(seq 1 20) | pv -i1 -l > /dev/null\n\nAnd then in a second console, I open psql and type:\n\npgtest=> begin; insert into categories (name) select 'category_' || x from\ngenerate_series(1,1000) as x;\n\nThe QPS displayed be the first command drops immediately 20-30 times and\nstays low as long as the transaction with insert is open.\n\nHere's the script that I use to initiate the database. 
You should be able\nto replicate the situation easily yourself if you want.\n\ndrop database pgtest;\ndrop user pgtest;\ncreate user pgtest with password 'pgtest';\ncreate database pgtest;\ngrant all privileges on database pgtest to pgtest;\n\\connect pgtest\nCREATE TABLE categories\n(\n id bigserial NOT NULL,\n name character varying NOT NULL,\n CONSTRAINT categories_pkey PRIMARY KEY (id)\n);\nCREATE TABLE offers\n(\n id bigserial NOT NULL,\n name character varying NOT NULL,\n category_id bigint NOT NULL,\n CONSTRAINT offers_pkey PRIMARY KEY (id)\n);\ninsert into categories (name) select 'category_' || x from\ngenerate_series(1,1000) as x;\ninsert into offers (name, category_id) select 'offer_' || x, floor(random()\n* 1000) + 1 from generate_series(1,1000*1000) as x;\nalter table categories owner to pgtest;\nalter table offers owner to pgtest;\n\nThanks,\nBartek\n\n\n\n\nOn Fri, Sep 20, 2013 at 3:42 AM, Julian <[email protected]> wrote:\n\n> Hi Bart,\n>\n> You are doing heavy random reads in addition to a 1000k row insert\n> within a single transaction.\n>\n> Also it is not clear whether your query within the python loop is itself\n> within a transaction. (i.e. pyscopg2.connection.autocommit to True,\n> disables transactional queries).\n>\n> Depending on your pg adapter, it may open a transaction by default even\n> though it may not be required.\n>\n> So please clarify whether this is the case.\n>\n> Regards,\n>\n> Julian.\n>\n> On 20/09/13 10:49, Bartłomiej Romański wrote:\n> > Hi all\n> >\n> > We're experiencing a very strange performance issue. Our setup is a\n> > bit more complicated, but we've managed to isolate and replicate the\n> > core problem. Here's what we observe:\n> >\n> > We took a strong machine (128 GB RAM, 8-core CPU, SSD drives...) 
and\n> > installed a fresh copy of PostgreSQL 9.2 (Ubuntu Server 12.04 LTS,\n> > default configuration).\n> >\n> > Then, we created a test database with the following schema:\n> >\n> > CREATE TABLE offers\n> > (\n> > id bigserial NOT NULL,\n> > name character varying NOT NULL,\n> > category_id bigint NOT NULL,\n> > CONSTRAINT offers_pkey PRIMARY KEY (id)\n> > );\n> >\n> > CREATE TABLE categories\n> > (\n> > id bigserial NOT NULL,\n> > name character varying NOT NULL,\n> > CONSTRAINT categories_pkey PRIMARY KEY (id)\n> > );\n> >\n> > and populated it with in the following way:\n> >\n> > insert into categories (name) select 'category_' || x from\n> > generate_series(1,1000) as x;\n> > insert into offers (name, category_id) select 'offer_' || x,\n> > floor(random() * 1000) + 1 from generate_series(1,1000*1000) as x;\n> >\n> > Finally, we created a python script to make simple queries in a loop:\n> >\n> > while True:\n> > id = random.randrange(1, 1000 * 1000)\n> > db.execute('select offers.id <http://offers.id/>, offers.name\n> > <http://offers.name/>, categories.id <http://categories.id/>,\n> > categories.name <http://categories.name/> from offers left join\n> > categories on categories.id <http://categories.id/> =\n> > offers.category_id where offers.id <http://offers.id/> = %s', (id,))\n> > print db.fetchall()\n> >\n> > We start 20 instances simultaneously and measure performance:\n> >\n> > parallel -j 20 ./test.py -- $(seq 1 20) | pv -i1 -l > /dev/null\n> >\n> > Normally we observe about 30k QPS what's a satisfying result (without\n> > any tuning at all).\n> >\n> > The problem occurs when we open a second console, start psql and type:\n> >\n> > pgtest=> begin; insert into categories (name) select 'category_' || x\n> > from generate_series(1,1000) as x;\n> >\n> > We start a transaction and insert 1k records to the 'categories'\n> > table. After that performance of the benchmark described above\n> > immediately drops to about 1-2k QPS. That's 20-30 times! After closing\n> > the transaction (committing or aborting - doesn't matter) it\n> > immediately jumps back to 30k QPS.\n> >\n> > Restarting the running script and other simple tricks do not change\n> > anything. The hanging, open transaction is causing a huge slowdown.\n> > What's more when doing similar insert (even much larger) to the\n> > 'offers' table we do not observe this effect.\n> >\n> > We analyzed the problem a bit deeper and find out that the actual\n> > query execution times are not affected that much. They are constantly\n> > close to 0.5 ms. This can be observed in a server log (after enabling\n> > appropriate option) and this can be found in 'explain analyze...'\n> > result. Also the query plan returned do not change and looks optimal\n> > (pkey scan for 'offers' + pkey scan for 'categories').\n> >\n> > After a few random thought we've finally turned the\n> > 'log_planner_stats' option and found out that the planner executions\n> > times are highly affected by the hanging transaction. Here's the\n> > typical output in the initial situation:\n> >\n> > 2013-09-17 21:54:59 UTC LOG: PLANNER STATISTICS\n> > 2013-09-17 21:54:59 UTC DETAIL: ! system usage stats:\n> > ! 0.000137 elapsed 0.000000 user 0.000000 system sec\n> > ! [2.169670 user 0.383941 sys total]\n> > ! 0/0 [0/11520] filesystem blocks in/out\n> > ! 0/0 [0/7408] page faults/reclaims, 0 [0] swaps\n> > ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> > ! 
0/0 [1362/7648] voluntary/involuntary context switches\n> >\n> > And here's a typical output with a hanging transaction:\n> >\n> > 2013-09-17 21:56:12 UTC LOG: PLANNER STATISTICS\n> > 2013-09-17 21:56:12 UTC DETAIL: ! system usage stats:\n> > ! 0.027251 elapsed 0.008999 user 0.001000 system sec\n> > ! [32.722025 user 3.550460 sys total]\n> > ! 0/0 [0/115128] filesystem blocks in/out\n> > ! 0/0 [0/7482] page faults/reclaims, 0 [0] swaps\n> > ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> > ! 1/6 [12817/80202] voluntary/involuntary context switches\n> >\n> > As you can see it takes over 100 times more time when the extra\n> > transaction is open!\n> >\n> > Any ideas why's that? It extremely affects total query time.\n> >\n> > I know that using prepared statements to solves the problem\n> > completely, so we can deal with it, but we've already spend a lot of\n> > time to go that far and I'm personally a bit curious what's the\n> > fundamental reason for such a behavior.\n> >\n> > I'll be very thankful for any explanation what's going on here!\n> >\n> > Thanks,\n> > BR\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi Julian,Here's my complete python script:import psycopg2import randomimport mathimport time\n\nconnection = psycopg2.connect('host=localhost dbname=pgtest user=pgtest password=pgtest')cursor = connection.cursor()while True:    id = random.randrange(1, 1000 * 1000)\n    cursor.execute('select offers.id, offers.name, categories.id, categories.name from offers left join categories on categories.id = offers.category_id where offers.id = %s', (id,))\n    print cursor.fetchall()So I assume that each of 20 instances opens and uses a single transaction. I've just tested the options with connection.autocommit = True at the begging, but it does not change anything. Also in production (where we first noticed the problem) we use a new transaction for every select.\nI start 20 instances of this python script (I use pv to measure performance):parallel -j 20 ./test.py -- $(seq 1 20) | pv -i1 -l > /dev/null\nAnd then in a second console, I open psql and type: \npgtest=> begin; insert into categories (name) select 'category_' || x from generate_series(1,1000) as x;\n\nThe QPS displayed be the first command drops immediately 20-30 times and stays low as long as the transaction with insert is open.Here's the script that I use to initiate the database. 
You should be able to replicate the situation easily yourself if you want.\ndrop database pgtest;drop user pgtest;create user pgtest with password 'pgtest';create database pgtest;grant all privileges on database pgtest to pgtest;\n\\connect pgtestCREATE TABLE categories(  id bigserial NOT NULL,  name character varying NOT NULL,  CONSTRAINT categories_pkey PRIMARY KEY (id));\nCREATE TABLE offers(  id bigserial NOT NULL,  name character varying NOT NULL,  category_id bigint NOT NULL,  CONSTRAINT offers_pkey PRIMARY KEY (id)\n);\ninsert into categories (name) select 'category_' || x from generate_series(1,1000) as x;insert into offers (name, category_id) select 'offer_' || x, floor(random() * 1000) + 1 from generate_series(1,1000*1000) as x;\nalter table categories owner to pgtest;alter table offers owner to pgtest;Thanks,Bartek\nOn Fri, Sep 20, 2013 at 3:42 AM, Julian <[email protected]> wrote:\n\nHi Bart,\n\nYou are doing heavy random reads in addition to a 1000k row insert\nwithin a single transaction.\n\nAlso it is not clear whether your query within the python loop is itself\nwithin a transaction. (i.e. pyscopg2.connection.autocommit to True,\ndisables transactional queries).\n\nDepending on your pg adapter, it may open a transaction by default even\nthough it may not be required.\n\nSo please clarify whether this is the case.\n\nRegards,\n\nJulian.\n\nOn 20/09/13 10:49, Bartłomiej Romański wrote:\n> Hi all\n>\n> We're experiencing a very strange performance issue. Our setup is a\n> bit more complicated, but we've managed to isolate and replicate the\n> core problem. Here's what we observe:\n>\n> We took a strong machine (128 GB RAM, 8-core CPU, SSD drives...) and\n> installed a fresh copy of PostgreSQL 9.2 (Ubuntu Server 12.04 LTS,\n> default configuration).\n>\n> Then, we created a test database with the following schema:\n>\n> CREATE TABLE offers\n> (\n> id bigserial NOT NULL,\n> name character varying NOT NULL,\n> category_id bigint NOT NULL,\n> CONSTRAINT offers_pkey PRIMARY KEY (id)\n> );\n>\n> CREATE TABLE categories\n> (\n> id bigserial NOT NULL,\n> name character varying NOT NULL,\n> CONSTRAINT categories_pkey PRIMARY KEY (id)\n> );\n>\n> and populated it with in the following way:\n>\n> insert into categories (name) select 'category_' || x from\n> generate_series(1,1000) as x;\n> insert into offers (name, category_id) select 'offer_' || x,\n> floor(random() * 1000) + 1 from generate_series(1,1000*1000) as x;\n>\n> Finally, we created a python script to make simple queries in a loop:\n>\n> while True:\n> id = random.randrange(1, 1000 * 1000)\n> db.execute('select offers.id <http://offers.id/>, offers.name\n\n\n> <http://offers.name/>, categories.id <http://categories.id/>,\n\n\n> categories.name <http://categories.name/> from offers left join\n> categories on categories.id <http://categories.id/> =\n> offers.category_id where offers.id <http://offers.id/> = %s', (id,))\n> print db.fetchall()\n>\n> We start 20 instances simultaneously and measure performance:\n>\n> parallel -j 20 ./test.py -- $(seq 1 20) | pv -i1 -l > /dev/null\n>\n> Normally we observe about 30k QPS what's a satisfying result (without\n> any tuning at all).\n>\n> The problem occurs when we open a second console, start psql and type:\n>\n> pgtest=> begin; insert into categories (name) select 'category_' || x\n> from generate_series(1,1000) as x;\n>\n> We start a transaction and insert 1k records to the 'categories'\n> table. 
After that performance of the benchmark described above\n> immediately drops to about 1-2k QPS. That's 20-30 times! After closing\n> the transaction (committing or aborting - doesn't matter) it\n> immediately jumps back to 30k QPS.\n>\n> Restarting the running script and other simple tricks do not change\n> anything. The hanging, open transaction is causing a huge slowdown.\n> What's more when doing similar insert (even much larger) to the\n> 'offers' table we do not observe this effect.\n>\n> We analyzed the problem a bit deeper and find out that the actual\n> query execution times are not affected that much. They are constantly\n> close to 0.5 ms. This can be observed in a server log (after enabling\n> appropriate option) and this can be found in 'explain analyze...'\n> result. Also the query plan returned do not change and looks optimal\n> (pkey scan for 'offers' + pkey scan for 'categories').\n>\n> After a few random thought we've finally turned the\n> 'log_planner_stats' option and found out that the planner executions\n> times are highly affected by the hanging transaction. Here's the\n> typical output in the initial situation:\n>\n> 2013-09-17 21:54:59 UTC LOG: PLANNER STATISTICS\n> 2013-09-17 21:54:59 UTC DETAIL: ! system usage stats:\n> ! 0.000137 elapsed 0.000000 user 0.000000 system sec\n> ! [2.169670 user 0.383941 sys total]\n> ! 0/0 [0/11520] filesystem blocks in/out\n> ! 0/0 [0/7408] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/0 [1362/7648] voluntary/involuntary context switches\n>\n> And here's a typical output with a hanging transaction:\n>\n> 2013-09-17 21:56:12 UTC LOG: PLANNER STATISTICS\n> 2013-09-17 21:56:12 UTC DETAIL: ! system usage stats:\n> ! 0.027251 elapsed 0.008999 user 0.001000 system sec\n> ! [32.722025 user 3.550460 sys total]\n> ! 0/0 [0/115128] filesystem blocks in/out\n> ! 0/0 [0/7482] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 1/6 [12817/80202] voluntary/involuntary context switches\n>\n> As you can see it takes over 100 times more time when the extra\n> transaction is open!\n>\n> Any ideas why's that? It extremely affects total query time.\n>\n> I know that using prepared statements to solves the problem\n> completely, so we can deal with it, but we've already spend a lot of\n> time to go that far and I'm personally a bit curious what's the\n> fundamental reason for such a behavior.\n>\n> I'll be very thankful for any explanation what's going on here!\n>\n> Thanks,\n> BR\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 20 Sep 2013 04:03:17 +0200", "msg_from": "=?ISO-8859-2?Q?Bart=B3omiej_Roma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" 
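A minimal sketch of the prepared-statement workaround mentioned above, assuming the pgtest schema from this thread (the statement name offer_by_id is made up for illustration). Because the join is planned once at PREPARE time, later EXECUTE calls can skip the planner, and with it the index-endpoint probe that the open transaction on categories makes expensive (assuming the generic plan gets reused):\n\n    PREPARE offer_by_id(bigint) AS\n        SELECT offers.id, offers.name, categories.id, categories.name\n        FROM offers\n        LEFT JOIN categories ON categories.id = offers.category_id\n        WHERE offers.id = $1;\n\n    -- already planned, so this should stay fast even while the\n    -- 'begin; insert into categories ...' transaction is open elsewhere\n    EXECUTE offer_by_id(123456);\n\nEach connection has to run the PREPARE once; with psycopg2 that can be a plain cursor.execute() right after connecting.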
}, { "msg_contents": "On Thu, Sep 19, 2013 at 5:49 PM, Bartłomiej Romański <[email protected]> wrote:\n\n>\n> Finally, we created a python script to make simple queries in a loop:\n>\n> while True:\n> id = random.randrange(1, 1000 * 1000)\n> db.execute('select offers.id, offers.name, categories.id,\n> categories.name from offers left join categories on categories.id =\n> offers.category_id where offers.id = %s', (id,))\n> print db.fetchall()\n>\n> We start 20 instances simultaneously and measure performance:\n>\n> parallel -j 20 ./test.py -- $(seq 1 20) | pv -i1 -l > /dev/null\n>\n> Normally we observe about 30k QPS what's a satisfying result (without any\n> tuning at all).\n>\n> The problem occurs when we open a second console, start psql and type:\n>\n> pgtest=> begin; insert into categories (name) select 'category_' || x from\n> generate_series(1,1000) as x;\n>\n\n\nRelated topics have been discussed recently, but without much apparent\nresolution.\n\nSee \"In progress INSERT wrecks plans on table\" and \"Performance bug in\nprepared statement binding in 9.2\" also on this list\n\nThe issues are:\n\n1) The planner actually queries the relation to find the end points of the\nvariable ranges, rather than using potentially out-of-date statistics.\n\n2) When doing so, it needs to wade through the \"in-progress\" rows, figuring\nout whether the owning transaction is still in progress or has already\ncommitted or aborted. If the owning transaction *has* committed or rolled\nback, then it can set hint bits so that future executions don't need to do\nthis. But if the owning transaction is still open, then the querying\ntransaction has done the work, but is not able to set any hint bits so\nother executions also need to do the work, repeatedly until the other\ntransactions finishes.\n\n3) Even worse, asking if a given transaction has finished yet can be a\nserious point of system-wide contention, because it takes the\nProcArrayLock, once per row which needs to be checked. So you have 20\nprocesses all fighting over the ProcArrayLock, each doing so 1000 times per\nquery.\n\nOne idea (from Simon, I think) was to remember that a transaction was just\nchecked and was in progress, and not checking it again for future rows. In\nthe future the transaction might have committed, but since it would have\ncommitted after we took the snapshot, thinking it is still in progress\nwould not be a correctness problem, it would just needlessly delay setting\nthe hint bits.\n\nAnother idea was not to check if it were in progress at all, because if it\nis in the snapshot it doesn't matter if it is still in progress. 
This\nwould be a slightly more aggressive way to delay setting the hint bit (but\nalso delay doing the work needed to figure out how to set them).\n\nItems 2 and 3 can also arise in situations other than paired with 1.\n\n\nCheers,\n\nJeff", "msg_date": "Fri, 20 Sep 2013 15:01:33 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?"
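A small psql-only sketch of how issue 1 can be observed with the pgtest schema from this thread (the session labels are just for illustration). A bare EXPLAIN never executes the query, so any slowdown it shows while the other transaction is open is planner time, i.e. the endpoint probe, rather than executor time:\n\n    -- session A: leave an insert into the small join table uncommitted\n    BEGIN;\n    INSERT INTO categories (name)\n        SELECT 'category_' || x FROM generate_series(1, 1000) AS x;\n    -- no COMMIT yet\n\n    -- session B: time planning only\n    \timing on\n    EXPLAIN SELECT offers.id, offers.name, categories.id, categories.name\n    FROM offers LEFT JOIN categories ON categories.id = offers.category_id\n    WHERE offers.id = 123456;\n    -- repeat after session A commits or rolls back and compare timings;\n    -- log_planner_stats, as used earlier in the thread, gives the same\n    -- breakdown in the server log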
}, { "msg_contents": "On 21/09/2013, at 00.01, Jeff Janes <[email protected]> wrote:\n> See \"In progress INSERT wrecks plans on table\" and \"Performance bug in prepared statement binding in 9.2\" also on this list\n\nThis feels like the same\nhttp://postgresql.1045698.n5.nabble.com/Slow-query-plan-generation-fast-query-PG-9-2-td5769363.html\n\n\n\n> \n> The issues are:\n> \n> 1) The planner actually queries the relation to find the end points of the variable ranges, rather than using potentially out-of-date statistics.\n> \n\nIn my app i would prefer potentially out-of-date statistics instead.\n\nJesper\nOn 21/09/2013, at 00.01, Jeff Janes <[email protected]> wrote:See \"In progress INSERT wrecks plans on table\" and \"Performance bug in prepared statement binding in 9.2\" also on this listThis feels like the samehttp://postgresql.1045698.n5.nabble.com/Slow-query-plan-generation-fast-query-PG-9-2-td5769363.html\n\nThe issues are:1) The planner actually queries the relation to find the end points of the variable ranges, rather than using potentially out-of-date statistics.\n\nIn my app i would prefer potentially out-of-date statistics instead.Jesper", "msg_date": "Sat, 21 Sep 2013 07:08:20 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging transaction\n (20-30 times)?" }, { "msg_contents": "Jeff Janes <[email protected]> wrote:\n\n> 1) The planner actually queries the relation to find the end\n> points of the variable ranges, rather than using potentially\n> out-of-date statistics.\n\nAre we talking about the probe for the end (or beginning) of an\nindex?  If so, should we even care about visibility of the row\nrelated to the most extreme index entry?  Should we even go to the\nheap during the plan phase?\n\n-- \nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 23 Sep 2013 07:00:16 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging transaction\n (20-30 times)?" }, { "msg_contents": "Kevin Grittner <[email protected]> writes:\n> Are we talking about the probe for the end (or beginning) of an\n> index?  If so, should we even care about visibility of the row\n> related to the most extreme index entry?  Should we even go to the\n> heap during the plan phase?\n\nConsider the case where some transaction inserted a wildly out-of-range\nvalue, then rolled back. If we don't check validity of the heap row,\nwe'd be using that silly endpoint value for planning purposes ---\nindefinitely. That's not an improvement over the situation that the\nprobe is meant to fix.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 Sep 2013 12:35:47 +0200", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging transaction\n (20-30 times)?" }, { "msg_contents": "> Kevin Grittner <[email protected]> writes:\n>> Are we talking about the probe for the end (or beginning) of an\n>> index?  
If so, should we even care about visibility of the row\n>> related to the most extreme index entry?  Should we even go to the\n>> heap during the plan phase?\n>\n> Consider the case where some transaction inserted a wildly out-of-range\n> value, then rolled back. If we don't check validity of the heap row,\n> we'd be using that silly endpoint value for planning purposes ---\n> indefinitely. That's not an improvement over the situation that the\n> probe is meant to fix.\n\nApparently it is waiting for locks, cant the check be make in a\n\"non-blocking\" way, so if it ends up waiting for a lock then it just\nassumes non-visible and moves onto the next non-blocking?\n\nThis stuff is a 9.2 feature right? What was the original problem to be\nadressed?\n\n-- \nJesper\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 Sep 2013 17:01:14 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "On 09/24/2013 08:01 AM, [email protected] wrote:\n> This stuff is a 9.2 feature right? What was the original problem to be\n> adressed?\n\nEarlier, actually. 9.1? 9.0?\n\nThe problem addressed was that, for tables with a \"progressive\" value\nlike a sequence or a timestamp, the planner tended to estimate 1 row any\ntime the user queried the 10,000 most recent rows due to the stats being\nout-of-date. This resulted in some colossally bad query plans for a\nvery common situation.\n\nSo there's no question that the current behavior is an improvement,\nsince it affects *only* users who have left an idle transaction open for\nlong periods of time, something you're not supposed to do anyway. Not\nthat we shouldn't fix it (and backport the fix), but we don't want to\nregress to the prior planner behavior.\n\nHowever, a solution is not readily obvious:\n\nOn 09/24/2013 03:35 AM, Tom Lane wrote:\n> Kevin Grittner <[email protected]> writes:\n>> > Are we talking about the probe for the end (or beginning) of an\n>> > index? If so, should we even care about visibility of the row\n>> > related to the most extreme index entry? Should we even go to the\n>> > heap during the plan phase?\n> Consider the case where some transaction inserted a wildly out-of-range\n> value, then rolled back. If we don't check validity of the heap row,\n> we'd be using that silly endpoint value for planning purposes ---\n> indefinitely. That's not an improvement over the situation that the\n> probe is meant to fix.\n\nAgreed. And I'll also attest that the patch did fix a chronic bad\nplanner issue.\n\nOn 09/20/2013 03:01 PM, Jeff Janes wrote:> 3) Even worse, asking if a\ngiven transaction has finished yet can be a\n> serious point of system-wide contention, because it takes the\n> ProcArrayLock, once per row which needs to be checked. So you have 20\n> processes all fighting over the ProcArrayLock, each doing so 1000\ntimes per\n> query.\n\nWhy do we need a procarraylock for this? 
Seems like the solution would\nbe not to take a lock at all; the information on transaction commit is\nin the clog, after all.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 Sep 2013 10:43:06 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "Hi\n\n\nOn Tue, Sep 24, 2013 at 5:01 PM, <[email protected]> wrote:\n\n>\n> Apparently it is waiting for locks, cant the check be make in a\n> \"non-blocking\" way, so if it ends up waiting for a lock then it just\n> assumes non-visible and moves onto the next non-blocking?\n>\n\nIt's not only waiting for locks: that's one reason, but you can get the same slowdown with only one\nclient and a bigger insert.\n\nThe issue is that a btree search for one value degenerates into a linear\nsearch over 1000 or more tuples.\n\nAs a matter of fact you get the same slowdown after a rollback until\nautovacuum runs, and if autovacuum can't keep up...\n\nDidier", "msg_date": "Tue, 24 Sep 2013 20:03:20 +0200", "msg_from": "didier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "On Tue, Sep 24, 2013 at 11:03 AM, didier <[email protected]> wrote:\n\n> Hi\n>\n>\n> On Tue, Sep 24, 2013 at 5:01 PM, <[email protected]> wrote:\n>\n>>\n>> Apparently it is waiting for locks, cant the check be make in a\n>> \"non-blocking\" way, so if it ends up waiting for a lock then it just\n>> assumes non-visible and moves onto the next non-blocking?\n>>\n>\n> Not only, it's a reason but you can get the same slow down with only one\n> client and a bigger insert.\n>\n> The issue is that a btree search for one value degenerate to a linear\n> search other 1000 or more tuples.\n>\n> As a matter of fact you get the same slow down after a rollback until\n> autovacuum, and if autovacuum can't keep up...\n>\n\nHave you experimentally verified the last part? 
btree indices have some\nspecial kill-tuple code which should remove aborted tuples from the index\nthe first time they are encountered, without need for a vacuum.\n\nCheers,\n\nJeff", "msg_date": "Tue, 24 Sep 2013 16:30:21 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "> As a matter of fact you get the same slow down after a rollback until\nautovacuum, and if autovacuum can't keep up...\n\nActually, this is not what we observe. 
The performance goes back to the normal level immediately after committing or aborting the transaction.\nOn Wed, Sep 25, 2013 at 1:30 AM, Jeff Janes <[email protected]> wrote:\nOn Tue, Sep 24, 2013 at 11:03 AM, didier <[email protected]> wrote:\n\nHiOn Tue, Sep 24, 2013 at 5:01 PM, <[email protected]> wrote:\n\nApparently it is waiting for locks, cant the check be make in a\n\"non-blocking\" way, so if it ends up waiting for a lock then it just\nassumes non-visible and moves onto the next non-blocking?Not only, it's a reason but you can get the same slow down with only  one client and a bigger insert.\n\n\nThe issue is that a btree search for one value  degenerate to a linear search other  1000 or more tuples.\nAs a matter of fact you get the same slow down after a rollback until autovacuum, and if autovacuum can't keep up...Have you experimentally verified the last part?  btree indices have some special kill-tuple code which should remove aborted tuples from the index the first time they are encountered, without need for a vacuum.\nCheers,Jeff", "msg_date": "Wed, 25 Sep 2013 01:43:54 +0200", "msg_from": "=?ISO-8859-2?Q?Bart=B3omiej_Roma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "Hi\n\n\nOn Wed, Sep 25, 2013 at 1:30 AM, Jeff Janes <[email protected]> wrote:\n\n> On Tue, Sep 24, 2013 at 11:03 AM, didier <[email protected]> wrote:\n>\n>>\n>> As a matter of fact you get the same slow down after a rollback until\n>> autovacuum, and if autovacuum can't keep up...\n>>\n>\n> Have you experimentally verified the last part? btree indices have some\n> special kill-tuple code which should remove aborted tuples from the index\n> the first time they are encountered, without need for a vacuum.\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n\n>\n>\n\nYes my bad, it works but there's leftover junk and a vacuum is still\nneeded\n\nRunning above test with autovacuum off, 1 client and insert 50 000 on\npostgresql 9.4 qit version.\nBefore insert 2 000 queries/s\nafter insert 80/s\nafter rollback 800/s (back to 2 000/s if commit)\nafter vacuum 2 000 /s again and vacuum output:\n\nINFO: vacuuming\n\"public.categories\"\n\nINFO: scanned index \"categories_pkey\" to remove 50000 row\nversions\n\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.01\nsec.\n\nINFO: \"categories\": removed 50000 row versions in 319\npages\n\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00\nsec.\n\nINFO: index \"categories_pkey\" now contains 1000 row versions in 278\npages\n\nDETAIL: 50000 index row versions were\nremoved.\n\n272 index pages have been deleted, 136 are currently\nreusable.\n\nCPU 0.00s/0.00u sec elapsed 0.00\nsec.\n\nINFO: \"categories\": found 50000 removable, 1000 nonremovable row versions\nin 325 out of 325 pages\nDETAIL: 0 dead row versions cannot be removed\nyet.\n\nThere were 0 unused item\npointers.\n\n0 pages are entirely\nempty.\n\nCPU 0.00s/0.01u sec elapsed 0.02\nsec.\n\nINFO: \"categories\": stopping truncate due to conflicting lock\nrequest\n\nINFO: vacuuming\n\"pg_toast.pg_toast_16783\"\n\nINFO: index \"pg_toast_16783_index\" now contains 0 row versions in 1\npages\n\nDETAIL: 0 index row versions were\nremoved.\n\n0 index pages have been deleted, 0 are currently\nreusable.\n\nCPU 0.00s/0.00u sec elapsed 0.00\nsec.\n\nINFO: \"pg_toast_16783\": found 0 removable, 0 nonremovable row versions in\n0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed\nyet.\n\nThere were 0 unused item\npointers.\n\n0 
pages are entirely\nempty.\n\nCPU 0.00s/0.00u sec elapsed 0.00\nsec.\n\nINFO: analyzing\n\"public.categories\"\n\nINFO: \"categories\": scanned 325 of 325 pages, containing 1000 live rows\nand 0 dead rows; 1000 rows in sample, 1000 estimated total rows\n\nperf output after rollback but before vacuum\n 93.36% postgres\n/var/lib/vz/root/165/usr/local/pgsql/bin/postgres\n |\n |--41.51%-- _bt_checkkeys\n | |\n | |--93.03%-- _bt_readpage\n | | |\n | | |--97.46%-- _bt_steppage\n | | | _bt_first\n | | | btgettuple\n | | | FunctionCall2Coll\n | | | index_getnext_tid\n\n\nDidier", "msg_date": "Wed, 25 Sep 2013 06:49:25 +0200", "msg_from": "didier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "On Tue, Sep 24, 2013 at 3:35 AM, Tom Lane <[email protected]> wrote:\n\n> Kevin Grittner <[email protected]> writes:\n> > Are we talking about the probe for the end (or beginning) of an\n> > index? If so, should we even care about visibility of the row\n> > related to the most extreme index entry? Should we even go to the\n> > heap during the plan phase?\n>\n> Consider the case where some transaction inserted a wildly out-of-range\n> value, then rolled back. If we don't check validity of the heap row,\n> we'd be using that silly endpoint value for planning purposes ---\n> indefinitely.\n\n\n\nWould it really be indefinite? 
Would it be different from if someone\ninserted a wild value, committed, then deleted it and committed that? It\nseems like eventually the histogram would have to get rebuilt with the\nability to shrink the range.\n\nTo get really complicated, it could stop at an in-progress tuple and use\nits value for immediate purposes, but suppress storing it in the histogram\n(storing only committed, not in-progress, values).\n\nCheers,\n\nJeff\n\nOn Tue, Sep 24, 2013 at 3:35 AM, Tom Lane <[email protected]> wrote:\nKevin Grittner <[email protected]> writes:\n> Are we talking about the probe for the end (or beginning) of an\n> index?  If so, should we even care about visibility of the row\n> related to the most extreme index entry?  Should we even go to the\n> heap during the plan phase?\n\nConsider the case where some transaction inserted a wildly out-of-range\nvalue, then rolled back.  If we don't check validity of the heap row,\nwe'd be using that silly endpoint value for planning purposes ---\nindefinitely.Would it really be indefinite?  Would it be different from if someone inserted a wild value, committed, then deleted it and committed that?  It seems like eventually the histogram would have to get rebuilt with the ability to shrink the range.\nTo get really complicated, it could stop at an in-progress tuple and use its value for immediate purposes, but suppress storing it in the histogram (storing only committed, not in-progress, values).\nCheers,Jeff", "msg_date": "Tue, 24 Sep 2013 23:48:42 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "On Tue, Sep 24, 2013 at 10:43 AM, Josh Berkus <[email protected]> wrote:\n\n> On 09/24/2013 08:01 AM, [email protected] wrote:\n> > This stuff is a 9.2 feature right? What was the original problem to be\n> > adressed?\n>\n> Earlier, actually. 9.1? 9.0?\n>\n> The problem addressed was that, for tables with a \"progressive\" value\n> like a sequence or a timestamp, the planner tended to estimate 1 row any\n> time the user queried the 10,000 most recent rows due to the stats being\n> out-of-date. This resulted in some colossally bad query plans for a\n> very common situation.\n>\n> So there's no question that the current behavior is an improvement,\n> since it affects *only* users who have left an idle transaction open for\n> long periods of time, something you're not supposed to do anyway.\n\n\nSome transaction just take a long time to complete their work. If the\nfirst thing it does is insert these poisoned values, then go on to do other\nintensive work on other tables, it can do some serious damage without being\nidle.\n\n\n\n> Not\n> that we shouldn't fix it (and backport the fix), but we don't want to\n> regress to the prior planner behavior.\n>\n> However, a solution is not readily obvious:\n>\n\nThe mergejoinscansel code is almost pathologically designed to exercise\nthis case (which seems to be what is doing in the original poster) because\nit systematically probes the highest and lowest values from one table\nagainst the other. If they have the same range, that means it will always\nbe testing the upper limit. Perhaps mergejoinscansel could pass a flag to\nprevent the look-up from happening. My gut feeling is that mergejoin it\nwould not be very sensitive to the progressive value issue, but I can't\nreally back that up. 
On the other hand, if we could just make getting the\nactual value faster then everyone would be better off.\n\n\n>\n> On 09/20/2013 03:01 PM, Jeff Janes wrote:> 3) Even worse, asking if a\n> given transaction has finished yet can be a\n> > serious point of system-wide contention, because it takes the\n> > ProcArrayLock, once per row which needs to be checked. So you have 20\n> > processes all fighting over the ProcArrayLock, each doing so 1000\n> times per\n> > query.\n>\n> Why do we need a procarraylock for this? Seems like the solution would\n> be not to take a lock at all; the information on transaction commit is\n> in the clog, after all.\n>\n\nMy understanding is that you are not allowed to check the clog until after\nyou verify the transaction is no longer in progress, otherwise you open up\nrace conditions.\n\nCheers,\n\nJeff\n\nOn Tue, Sep 24, 2013 at 10:43 AM, Josh Berkus <[email protected]> wrote:\nOn 09/24/2013 08:01 AM, [email protected] wrote:\n\n> This stuff is a 9.2 feature right? What was the original problem to be\n> adressed?\n\nEarlier, actually.  9.1?  9.0?\n\nThe problem addressed was that, for tables with a \"progressive\" value\nlike a sequence or a timestamp, the planner tended to estimate 1 row any\ntime the user queried the 10,000 most recent rows due to the stats being\nout-of-date.  This resulted in some colossally bad query plans for a\nvery common situation.\n\nSo there's no question that the current behavior is an improvement,\nsince it affects *only* users who have left an idle transaction open for\nlong periods of time, something you're not supposed to do anyway. Some transaction just take a long time to complete their work.  If the first thing it does is insert these poisoned values, then go on to do other intensive work on other tables, it can do some serious damage without being idle.\n  Not\nthat we shouldn't fix it (and backport the fix), but we don't want to\nregress to the prior planner behavior.\n\nHowever, a solution is not readily obvious:The mergejoinscansel code is almost pathologically designed to exercise this case (which seems to be what is doing in the original poster) because it systematically probes the highest and lowest values from one table against the other.  If they have the same range, that means it will always be testing the upper limit.  Perhaps mergejoinscansel could pass a flag to prevent the look-up from happening.  My gut feeling is that mergejoin it would not be very sensitive to the progressive value issue, but I can't really back that up.  On the other hand, if we could just make getting the actual value faster then everyone would be better off.\n \n\nOn 09/20/2013 03:01 PM, Jeff Janes wrote:> 3) Even worse, asking if a\ngiven transaction has finished yet can be a\n> serious point of system-wide contention, because it takes the\n> ProcArrayLock, once per row which needs to be checked.  So you have 20\n> processes all fighting over the ProcArrayLock, each doing so 1000\ntimes per\n> query.\n\nWhy do we need a procarraylock for this?  
Seems like the solution would\nbe not to take a lock at all; the information on transaction commit is\nin the clog, after all.My understanding is that you are not allowed to check the clog until after you verify the transaction is no longer in progress, otherwise you open up race conditions.\n Cheers,Jeff", "msg_date": "Wed, 25 Sep 2013 00:06:06 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "On 09/25/2013 12:06 AM, Jeff Janes wrote:\n>> Why do we need a procarraylock for this? Seems like the solution would\n>> be not to take a lock at all; the information on transaction commit is\n>> in the clog, after all.\n>>\n> \n> My understanding is that you are not allowed to check the clog until after\n> you verify the transaction is no longer in progress, otherwise you open up\n> race conditions.\n\nIn this particular case, I'd argue that we don't care about race\nconditions -- it's a plan estimate. We certainly care about them a lot\nless than lock-blocks.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 10:36:24 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "On 2013-09-25 00:06:06 -0700, Jeff Janes wrote:\n> > On 09/20/2013 03:01 PM, Jeff Janes wrote:> 3) Even worse, asking if a\n> > given transaction has finished yet can be a\n> > > serious point of system-wide contention, because it takes the\n> > > ProcArrayLock, once per row which needs to be checked. So you have 20\n> > > processes all fighting over the ProcArrayLock, each doing so 1000\n> > times per\n> > > query.\n\nThat should be gone in master, we don't use SnapshotNow anymore which\nhad those TransactionIdIsInProgress() calls you're probably referring\nto. The lookups discussed in this thread now use the statement's\nsnapshot. And all those have their own copy of the currently running\ntransactions.\n\n> > Why do we need a procarraylock for this? Seems like the solution would\n> > be not to take a lock at all; the information on transaction commit is\n> > in the clog, after all.\n\nMore clog accesses would hardly improve the situation.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 19:53:13 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "On Wed, Sep 25, 2013 at 10:53 AM, Andres Freund <[email protected]>wrote:\n\n> On 2013-09-25 00:06:06 -0700, Jeff Janes wrote:\n> > > On 09/20/2013 03:01 PM, Jeff Janes wrote:> 3) Even worse, asking if a\n> > > given transaction has finished yet can be a\n> > > > serious point of system-wide contention, because it takes the\n> > > > ProcArrayLock, once per row which needs to be checked. 
So you have\n> 20\n> > > > processes all fighting over the ProcArrayLock, each doing so 1000\n> > > times per\n> > > > query.\n>\n> That should be gone in master, we don't use SnapshotNow anymore which\n> had those TransactionIdIsInProgress() calls you're probably referring\n> to. The lookups discussed in this thread now use the statement's\n> snapshot. And all those have their own copy of the currently running\n> transactions.\n>\n\n\nSee HeapTupleSatisfiesMVCC, near line 943 of tqual.c:\n\n else if (TransactionIdIsInProgress(HeapTupleHeaderGetXmin(tuple)))\n return false;\n else if (TransactionIdDidCommit(HeapTupleHeaderGetXmin(tuple)))\n SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED,\n HeapTupleHeaderGetXmin(tuple));\n\n\nIf we guarded that check by moving up line 961 to before 943:\n\n if (XidInMVCCSnapshot(HeapTupleHeaderGetXmin(tuple), snapshot))\n return false; /* treat as still in progress */\n\nThen we could avoid the contention, as that check only refers to local\nmemory.\n\nAs far as I can tell, the only downside of doing that is that, since hint\nbits might be set later, it is possible some dirty pages will get written\nunhinted and then re-dirtied by the hint bit setting, when more aggressive\nsetting would have only one combined dirty write instead. But that seems\nrather hypothetical, and if it really is a problem we should probably\ntackle it directly rather than by barring other optimizations.\n\nCheers,\n\nJeff", "msg_date": "Wed, 25 Sep 2013 11:17:51 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?"
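The contention being described here is on ProcArrayLock. The 9.2-era servers in this thread cannot show it directly, but on newer releases (9.6 and later) pg_stat_activity exposes wait events, so a sketch like the following, sampled repeatedly while the benchmark runs, can show whether backends really are piling up on that lock (the wait_event_type label for it differs between releases):\n\n    SELECT wait_event_type, wait_event, count(*)\n    FROM pg_stat_activity\n    WHERE wait_event = 'ProcArrayLock'\n    GROUP BY wait_event_type, wait_event;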
}, { "msg_contents": "On 2013-09-25 11:17:51 -0700, Jeff Janes wrote:\n> On Wed, Sep 25, 2013 at 10:53 AM, Andres Freund <[email protected]>wrote:\n> \n> > On 2013-09-25 00:06:06 -0700, Jeff Janes wrote:\n> > > > On 09/20/2013 03:01 PM, Jeff Janes wrote:> 3) Even worse, asking if a\n> > > > given transaction has finished yet can be a\n> > > > > serious point of system-wide contention, because it takes the\n> > > > > ProcArrayLock, once per row which needs to be checked. So you have\n> > 20\n> > > > > processes all fighting over the ProcArrayLock, each doing so 1000\n> > > > times per\n> > > > > query.\n> >\n> > That should be gone in master, we don't use SnapshotNow anymore which\n> > had those TransactionIdIsInProgress() calls you're probably referring\n> > to. The lookups discussed in this thread now use the statement's\n> > snapshot. And all those have their own copy of the currently running\n> > transactions.\n\n> See HeapTupleSatisfiesMVCC, near line 943 of tqual.c:\n> \n> else if (TransactionIdIsInProgress(HeapTupleHeaderGetXmin(tuple)))\n> return false;\n> else if (TransactionIdDidCommit(HeapTupleHeaderGetXmin(tuple)))\n> SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED,\n> HeapTupleHeaderGetXmin(tuple));\n> \n\nHm, sorry, misrembered things a bit there.\n\n> If we guarded that check by moving up line 961 to before 943:\n> \n> if (XidInMVCCSnapshot(HeapTupleHeaderGetXmin(tuple), snapshot))\n> return false; /* treat as still in progress */\n> \n> Then we could avoid the contention, as that check only refers to local\n> memory.\n\nThat wouldn't be correct afaics - the current TransactionIdIsInProgress\ncallsite is only called when no HEAP_XMIN_COMMITTED was set. So you\nwould have to duplicate it.\n\n> As far as I can tell, the only downside of doing that is that, since hint\n> bits might be set later, it is possible some dirty pages will get written\n> unhinted and then re-dirtied by the hint bit setting, when more aggressive\n> setting would have only one combined dirty write instead. But that seems\n> rather hypothetical, and if it really is a problem we should probably\n> tackle it directly rather than by barring other optimizations.\n\nI am - as evidenced - too tired to think about this properly, but I\nthink you might be right here.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 20:43:30 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "Andres, Jeff,\n\n\n>> As far as I can tell, the only downside of doing that is that, since hint\n>> bits might be set later, it is possible some dirty pages will get written\n>> unhinted and then re-dirtied by the hint bit setting, when more aggressive\n>> setting would have only one combined dirty write instead. 
But that seems\n>> rather hypothetical, and if it really is a problem we should probably\n>> tackle it directly rather than by barring other optimizations.\n> \n> I am - as evidenced - too tired to think about this properly, but I\n> think you might be right here.\n\nAny thoughts on a fix for this we could get into 9.2.5?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 27 Sep 2013 13:57:02 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "On 2013-09-27 13:57:02 -0700, Josh Berkus wrote:\n> Andres, Jeff,\n> \n> \n> >> As far as I can tell, the only downside of doing that is that, since hint\n> >> bits might be set later, it is possible some dirty pages will get written\n> >> unhinted and then re-dirtied by the hint bit setting, when more aggressive\n> >> setting would have only one combined dirty write instead. But that seems\n> >> rather hypothetical, and if it really is a problem we should probably\n> >> tackle it directly rather than by barring other optimizations.\n> > \n> > I am - as evidenced - too tired to think about this properly, but I\n> > think you might be right here.\n> \n> Any thoughts on a fix for this we could get into 9.2.5?\n\nI don't see much chance to apply anything like this in a\nbackbranch. Changing IO patterns in a noticeable way in a minor release\nis just asking for trouble.\n\nAlso, this really isn't going to fix the issue discussed here - this was\njust about the additional ProcArrayLock contention. I don't think it\nwould change anything dramatical in your case.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 27 Sep 2013 23:14:39 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> Also, this really isn't going to fix the issue discussed here - this was\n> just about the additional ProcArrayLock contention. I don't think it\n> would change anything dramatical in your case.\n\nAll of these proposals are pretty scary for back-patching purposes,\nanyway. I think what we should consider doing is just changing\nget_actual_variable_range() to use a cheaper snapshot type, as in\nthe attached patch (which is for 9.3 but applies to 9.2 with slight\noffset). On my machine, this seems to make the pathological behavior\nin BR's test case go away just fine. I'd be interested to hear what\nit does in the real-world scenarios being complained of.\n\n\t\t\tregards, tom lane\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 11 Nov 2013 14:48:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging transaction\n (20-30 times)?" 
}, { "msg_contents": "I wrote:\n> Andres Freund <[email protected]> writes:\n>> Also, this really isn't going to fix the issue discussed here - this was\n>> just about the additional ProcArrayLock contention. I don't think it\n>> would change anything dramatical in your case.\n\n> All of these proposals are pretty scary for back-patching purposes,\n> anyway. I think what we should consider doing is just changing\n> get_actual_variable_range() to use a cheaper snapshot type, as in\n> the attached patch (which is for 9.3 but applies to 9.2 with slight\n> offset). On my machine, this seems to make the pathological behavior\n> in BR's test case go away just fine. I'd be interested to hear what\n> it does in the real-world scenarios being complained of.\n\nWell, it's three months later, and none of the people who were complaining\nso vociferously in this thread seem to have bothered to test the proposed\nsolution.\n\nHowever, over at\nhttp://www.postgresql.org/message-id/CAFj8pRDHyAK_2JHSVKZ5YQNGQmFGVcJKcpBXhFaS=vSSCH-vNw@mail.gmail.com\nPavel did test it and reported that it successfully alleviates his\nreal-world problem. So I'm now inclined to commit this. Objections?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Feb 2014 11:06:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging transaction\n (20-30 times)?" }, { "msg_contents": "On 02/25/2014 08:06 AM, Tom Lane wrote:\n> Well, it's three months later, and none of the people who were complaining\n> so vociferously in this thread seem to have bothered to test the proposed\n> solution.\n\nSorry about that. The client lost interest once they had a workaround\n(fixing the hanging transaction), and I don't have direct access to\ntheir test workload -- nor was I able to reproduce the issue on a purely\nsynthetic workload.\n\n> However, over at\n> http://www.postgresql.org/message-id/CAFj8pRDHyAK_2JHSVKZ5YQNGQmFGVcJKcpBXhFaS=vSSCH-vNw@mail.gmail.com\n> Pavel did test it and reported that it successfully alleviates his\n> real-world problem. So I'm now inclined to commit this. Objections?\n\nNone from me.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Feb 2014 16:21:17 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner performance extremely affected by an hanging\n transaction (20-30 times)?" } ]
[ { "msg_contents": "Hi There,\n\n \n\nI have hit a query plan issue that I believe is a bug or under-estimation,\nand would like to know if there it is known or if there is any workaround.\n\n \n\nThis event_log table has 4 million rows. \n\n\"log_id\" is the primary key (bigint), \n\nthere is a composite index \"event_data_search\" over (event::text,\ninsert_time::datetime).\n\n \n\nQuery A:\n\nSELECT min(log_id) FROM event_log \n\nWHERE event='S-Create' AND\n\ninsert_time>'2013-09-15' and insert_time<'2013-09-16'\n\n \n\nQuery B: \n\nSELECT log_id FROM event_log \n\nWHERE event='S-Create' AND\n\ninsert_time>'2013-09-15' and insert_time<'2013-09-16'\n\nORDER BY log_id\n\n \n\nWhat I want to achieve is Query A - get the min log_id within a range. But\nit is very slow, taking 10 or 20 seconds.\n\nIf I don't limit the output to LIMIT 1 - like Query B - then it is\nsub-second fast.\n\n \n\nExplain of A - take 10~20 seconds to run\n\nLimit (cost=0.00..132.54 rows=1 width=8)\n\n -> Index Scan using event_log_pkey on event_log (cost=0.00..1503484.33\nrows=11344 width=8)\n\n Filter: ((insert_time > '2013-09-15 00:00:00'::timestamp without\ntime zone) AND (insert_time < '2013-09-16 00:00:00'::timestamp without time\nzone) AND (event = 'S-Create'::text))\n\n \n\nExplain of B - take a few milliseconds to run\n\nSort (cost=41015.85..41021.52 rows=11344 width=8)\n\n Sort Key: log_id\n\n -> Bitmap Heap Scan on event_log (cost=311.42..40863.05 rows=11344\nwidth=8)\n\n Recheck Cond: ((event = 'S-Create'::text) AND (insert_time >\n'2013-09-15 00:00:00'::timestamp without time zone) AND (insert_time <\n'2013-09-16 00:00:00'::timestamp without time zone))\n\n -> Bitmap Index Scan on event_data_search (cost=0.00..310.86\nrows=11344 width=0)\n\n Index Cond: ((event = 'S-Create'::text) AND (insert_time >\n'2013-09-15 00:00:00'::timestamp without time zone) AND (insert_time <\n'2013-09-16 00:00:00'::timestamp without time zone))\n\n \n\nPlan of A thought that the index scan node will get the first row right at\n0.00, and hence the limit node will get all the rows needed within 132.54\n(because event_log_pkey are sorted)\n\nI would like to point out that - this 0.00 estimation omits the fact that it\nactually takes a much longer time for the index scan node to get the first\nrow, because 3.99M rows it comes across won't meet the condition filter at\nall.\n\n \n\nOther background info:\n\nThe event_log table has been vacuumed and analyzed.\n\nI have PostgreSQL 9.2.4 (x64) on Windows Server 2008 R2 with me. 8GB ram.\n1*Xeon E5606.\n\n \n\nThanks,\n\nSam\n\n\nHi There, I have hit a query plan issue that I believe is a bug or under-estimation, and would like to know if there it is known or if there is any workaround… This event_log table has 4 million rows. “log_id” is the primary key (bigint), there is a composite index “event_data_search” over (event::text, insert_time::datetime). Query A:SELECT min(log_id) FROM event_log WHERE event='S-Create' ANDinsert_time>'2013-09-15' and insert_time<'2013-09-16' Query B: SELECT log_id FROM event_log WHERE event='S-Create' ANDinsert_time>'2013-09-15' and insert_time<'2013-09-16'ORDER BY log_id What I want to achieve is Query A – get the min log_id within a range. But it is very slow, taking 10 or 20 seconds.If I don’t limit the output to LIMIT 1 – like Query B – then it is sub-second fast. 
Explain of A – take 10~20 seconds to runLimit  (cost=0.00..132.54 rows=1 width=8)  ->  Index Scan using event_log_pkey on event_log  (cost=0.00..1503484.33 rows=11344 width=8)        Filter: ((insert_time > '2013-09-15 00:00:00'::timestamp without time zone) AND (insert_time < '2013-09-16 00:00:00'::timestamp without time zone) AND (event = 'S-Create'::text)) Explain of B – take a few milliseconds to runSort  (cost=41015.85..41021.52 rows=11344 width=8)  Sort Key: log_id  ->  Bitmap Heap Scan on event_log  (cost=311.42..40863.05 rows=11344 width=8)        Recheck Cond: ((event = 'S-Create'::text) AND (insert_time > '2013-09-15 00:00:00'::timestamp without time zone) AND (insert_time < '2013-09-16 00:00:00'::timestamp without time zone))        ->  Bitmap Index Scan on event_data_search  (cost=0.00..310.86 rows=11344 width=0)              Index Cond: ((event = 'S-Create'::text) AND (insert_time > '2013-09-15 00:00:00'::timestamp without time zone) AND (insert_time < '2013-09-16 00:00:00'::timestamp without time zone)) Plan of A thought that the index scan node will get the first row right at 0.00, and hence the limit node will get all the rows needed within 132.54 (because event_log_pkey are sorted)I would like to point out that – this 0.00 estimation omits the fact that it actually takes a much longer time for the index scan node to get the first row, because 3.99M rows it comes across won’t meet the condition filter at all. Other background info:The event_log table has been vacuumed and analyzed.I have PostgreSQL 9.2.4 (x64) on Windows Server 2008 R2 with me. 8GB ram. 1*Xeon E5606. Thanks,Sam", "msg_date": "Tue, 24 Sep 2013 17:24:07 +0800", "msg_from": "\"Sam Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow plan for MAX/MIN or LIMIT 1?" }, { "msg_contents": "On Tue, Sep 24, 2013 at 4:24 AM, Sam Wong <[email protected]> wrote:\n> Hi There,\n>\n>\n>\n> I have hit a query plan issue that I believe is a bug or under-estimation,\n> and would like to know if there it is known or if there is any workaround…\n>\n>\n>\n> This event_log table has 4 million rows.\n>\n> “log_id” is the primary key (bigint),\n>\n> there is a composite index “event_data_search” over (event::text,\n> insert_time::datetime).\n>\n>\n>\n> Query A:\n>\n> SELECT min(log_id) FROM event_log\n>\n> WHERE event='S-Create' AND\n>\n> insert_time>'2013-09-15' and insert_time<'2013-09-16'\n>\n>\n>\n> Query B:\n>\n> SELECT log_id FROM event_log\n>\n> WHERE event='S-Create' AND\n>\n> insert_time>'2013-09-15' and insert_time<'2013-09-16'\n>\n> ORDER BY log_id\n>\n>\n>\n> What I want to achieve is Query A – get the min log_id within a range. 
But\n> it is very slow, taking 10 or 20 seconds.\n>\n> If I don’t limit the output to LIMIT 1 – like Query B – then it is\n> sub-second fast.\n>\n>\n>\n> Explain of A – take 10~20 seconds to run\n>\n> Limit (cost=0.00..132.54 rows=1 width=8)\n>\n> -> Index Scan using event_log_pkey on event_log (cost=0.00..1503484.33\n> rows=11344 width=8)\n>\n> Filter: ((insert_time > '2013-09-15 00:00:00'::timestamp without\n> time zone) AND (insert_time < '2013-09-16 00:00:00'::timestamp without time\n> zone) AND (event = 'S-Create'::text))\n>\n>\n>\n> Explain of B – take a few milliseconds to run\n>\n> Sort (cost=41015.85..41021.52 rows=11344 width=8)\n>\n> Sort Key: log_id\n>\n> -> Bitmap Heap Scan on event_log (cost=311.42..40863.05 rows=11344\n> width=8)\n>\n> Recheck Cond: ((event = 'S-Create'::text) AND (insert_time >\n> '2013-09-15 00:00:00'::timestamp without time zone) AND (insert_time <\n> '2013-09-16 00:00:00'::timestamp without time zone))\n>\n> -> Bitmap Index Scan on event_data_search (cost=0.00..310.86\n> rows=11344 width=0)\n>\n> Index Cond: ((event = 'S-Create'::text) AND (insert_time >\n> '2013-09-15 00:00:00'::timestamp without time zone) AND (insert_time <\n> '2013-09-16 00:00:00'::timestamp without time zone))\n>\n>\n>\n> Plan of A thought that the index scan node will get the first row right at\n> 0.00, and hence the limit node will get all the rows needed within 132.54\n> (because event_log_pkey are sorted)\n>\n> I would like to point out that – this 0.00 estimation omits the fact that it\n> actually takes a much longer time for the index scan node to get the first\n> row, because 3.99M rows it comes across won’t meet the condition filter at\n> all.\n\nI think you got A and B mixed up there. Can you post explain analyze\n(not just 'explain'){ of the slow plan?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 Sep 2013 16:49:10 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" }, { "msg_contents": "On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong <[email protected]> wrote:\n> This event_log table has 4 million rows.\n>\n> “log_id” is the primary key (bigint),\n>\n> there is a composite index “event_data_search” over (event::text,\n> insert_time::datetime).\n\n\nI think you need to add log_id to that composite index to get pg to use it.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 Sep 2013 18:56:37 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" 
}, { "msg_contents": "Hi All and Merlin,\n\nSo here is the explain analyze output.\n\n------\nQuery A -- single row output, but very slow query\n------\nSELECT min(log_id) FROM event_log\nWHERE event='S-Create' AND\ninsert_time>'2013-09-15' and insert_time<'2013-09-16'\n\nhttp://explain.depesz.com/s/3H5\nResult (cost=134.48..134.49 rows=1 width=0) (actual\ntime=348370.719..348370.720 rows=1 loops=1)\n Output: $0\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..134.48 rows=1 width=8) (actual\ntime=348370.712..348370.713 rows=1 loops=1)\n Output: uco.event_log.log_id\n -> Index Scan using event_log_pkey on uco.event_log\n(cost=0.00..1525564.02 rows=11344 width=8) (actual\ntime=348370.709..348370.709 rows=1 loops=1)\n Output: uco.event_log.log_id\n Index Cond: (uco.event_log.log_id IS NOT NULL)\n Filter: ((uco.event_log.insert_time > '2013-09-15\n00:00:00'::timestamp without time zone) AND (uco.event_log.insert_time <\n'2013-09-16 00:00:00'::timestamp without time zone) AND (uco.event_log.event\n= 'S-Create'::text))\n Rows Removed by Filter: 43249789\nTotal runtime: 348370.762 ms\n\n------\nQuery B -- multiple row output, fast query, but I could get what I want from\nthe first output row\n------\nSELECT log_id FROM event_log\nWHERE event='S-Create' AND\ninsert_time>'2013-09-15' and insert_time<'2013-09-16'\nORDER BY log_id\n\nhttp://explain.depesz.com/s/s6P\nSort (cost=41015.85..41021.52 rows=11344 width=8) (actual\ntime=3651.695..3652.160 rows=6948 loops=1)\n Output: log_id\n Sort Key: event_log.log_id\n Sort Method: quicksort Memory: 518kB\n -> Bitmap Heap Scan on uco.event_log (cost=311.42..40863.05 rows=11344\nwidth=8) (actual time=448.349..3645.465 rows=6948 loops=1)\n Output: log_id\n Recheck Cond: ((event_log.event = 'S-Create'::text) AND\n(event_log.insert_time > '2013-09-15 00:00:00'::timestamp without time zone)\nAND (event_log.insert_time < '2013-09-16 00:00:00'::timestamp without time\nzone))\n -> Bitmap Index Scan on event_data_search (cost=0.00..310.86\nrows=11344 width=0) (actual time=447.670..447.670 rows=6948 loops=1)\n Index Cond: ((event_log.event = 'S-Create'::text) AND\n(event_log.insert_time > '2013-09-15 00:00:00'::timestamp without time zone)\nAND (event_log.insert_time < '2013-09-16 00:00:00'::timestamp without time\nzone))\nTotal runtime: 3652.535 ms\n\nP.S. If I put a LIMIT 1 at the end of this query, it will get an identical\nplan just like Query A.\n\n------\nMy observation:\nIn Query A, the lower bound of the INDEX SCAN node estimation is way off. It\nwon't get the first row output right at 0.00 because the filter needed to be\napplied.\n\nThanks,\nSam\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 20:23:04 +0800", "msg_from": "\"Sam Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" 
}, { "msg_contents": "On Tue, Sep 24, 2013 at 4:56 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong <[email protected]> wrote:\n>> This event_log table has 4 million rows.\n>>\n>> “log_id” is the primary key (bigint),\n>>\n>> there is a composite index “event_data_search” over (event::text,\n>> insert_time::datetime).\n>\n>\n> I think you need to add log_id to that composite index to get pg to use it.\n\nhurk: OP is two statistics misses (one of them massive that are\ncombing to gobsmack you).\n\nyour solution unfortuantely wont work: you can't combine two range\nsearches in a single index scan. it would probably work if you it\nlike this. If insert_time is a timestamp, not a timestamptz, we can\nconvert it to date to get what I think he wants (as long as his\nqueries are along date boundaries).\n\nhow about:\nCREATE INDEX ON event_log(event_id, insert_time::date, log_id);\n\nEXPLAIN ANALYZE\n SELECT * FROM event_log\n WHERE\n (event_id, insert_time::date, log_id) >= ('S-Create',\n'2013-09-15'::date, 0)\n AND event_id = 'S-Create' AND insert_time::date < '2013-09-16'::date\n ORDER BY\n event_id, insert_time::date, log_id\n LIMIT 1\n\nif insert_time is a timestamptz, we can materialize the date into the\ntable to get around that (timestamptz->date is a stable expression).\nIf date boundary handling is awkward, our best bet is probably to hack\nthe planner with a CTE. Note the above query will smoke the CTE based\none.\n\nWITH data AS\n(\n SELECT log_id FROM event_log\n WHERE event='S-Create' AND\n insert_time>'2013-09-15' and insert_time<'2013-09-16'\n)\nSELECT * from data ORDER BY log_id LIMIT 1;\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 08:29:26 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" }, { "msg_contents": "On Wed, Sep 25, 2013 at 10:29 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Sep 24, 2013 at 4:56 PM, Claudio Freire <[email protected]> wrote:\n>> On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong <[email protected]> wrote:\n>>> This event_log table has 4 million rows.\n>>>\n>>> “log_id” is the primary key (bigint),\n>>>\n>>> there is a composite index “event_data_search” over (event::text,\n>>> insert_time::datetime).\n>>\n>>\n>> I think you need to add log_id to that composite index to get pg to use it.\n>\n> hurk: OP is two statistics misses (one of them massive that are\n> combing to gobsmack you).\n>\n> your solution unfortuantely wont work: you can't combine two range\n> searches in a single index scan. it would probably work if you it\n> like this. 
If insert_time is a timestamp, not a timestamptz, we can\n> convert it to date to get what I think he wants (as long as his\n> queries are along date boundaries).\n\n\nI was thinking an index over:\n\n(event, date_trunc('day', insert_time), log_id)\n\nAnd the query like\n\nSELECT min(log_id) FROM event_log\nWHERE event='S-Create' AND\ndate_trunc('day',insert_time) = '2013-09-15'\n\n\nThat's a regular simple range scan over the index.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 12:20:54 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" }, { "msg_contents": "On Wed, Sep 25, 2013 at 10:20 AM, Claudio Freire <[email protected]> wrote:\n> On Wed, Sep 25, 2013 at 10:29 AM, Merlin Moncure <[email protected]> wrote:\n>> On Tue, Sep 24, 2013 at 4:56 PM, Claudio Freire <[email protected]> wrote:\n>>> On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong <[email protected]> wrote:\n>>>> This event_log table has 4 million rows.\n>>>>\n>>>> “log_id” is the primary key (bigint),\n>>>>\n>>>> there is a composite index “event_data_search” over (event::text,\n>>>> insert_time::datetime).\n>>>\n>>>\n>>> I think you need to add log_id to that composite index to get pg to use it.\n>>\n>> hurk: OP is two statistics misses (one of them massive that are\n>> combing to gobsmack you).\n>>\n>> your solution unfortuantely wont work: you can't combine two range\n>> searches in a single index scan. it would probably work if you it\n>> like this. If insert_time is a timestamp, not a timestamptz, we can\n>> convert it to date to get what I think he wants (as long as his\n>> queries are along date boundaries).\n>\n>\n> I was thinking an index over:\n>\n> (event, date_trunc('day', insert_time), log_id)\n>\n> And the query like\n>\n> SELECT min(log_id) FROM event_log\n> WHERE event='S-Create' AND\n> date_trunc('day',insert_time) = '2013-09-15'\n>\n>\n> That's a regular simple range scan over the index.\n\n*) date_trunc has same problems as ::date: it is stable expression\nonly for timestamptz. also, the index will be bigger since you're\nstill indexing timestamp\n\n*) row wise comparison search might be faster and is generalized to\nreturn N records, not jut one.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 10:54:50 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" 
}, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Merlin\n> Moncure\n> Sent: Wednesday, September 25, 2013 23:55\n> To: Claudio Freire\n> Cc: Sam Wong; postgres performance list\n> Subject: Re: [PERFORM] Slow plan for MAX/MIN or LIMIT 1?\n> \n> On Wed, Sep 25, 2013 at 10:20 AM, Claudio Freire <[email protected]>\n> wrote:\n> > On Wed, Sep 25, 2013 at 10:29 AM, Merlin Moncure <[email protected]>\n> wrote:\n> >> On Tue, Sep 24, 2013 at 4:56 PM, Claudio Freire\n<[email protected]>\n> wrote:\n> >>> On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong <[email protected]> wrote:\n> >>>> This event_log table has 4 million rows.\n> >>>>\n> >>>> \"log_id\" is the primary key (bigint),\n> >>>>\n> >>>> there is a composite index \"event_data_search\" over (event::text,\n> >>>> insert_time::datetime).\n> >>>\n> >>>\n> >>> I think you need to add log_id to that composite index to get pg to\nuse it.\n> >>\n> >> hurk: OP is two statistics misses (one of them massive that are\n> >> combing to gobsmack you).\n> >>\n> >> your solution unfortuantely wont work: you can't combine two range\n> >> searches in a single index scan. it would probably work if you it\n> >> like this. If insert_time is a timestamp, not a timestamptz, we can\n> >> convert it to date to get what I think he wants (as long as his\n> >> queries are along date boundaries).\n> >\n> >\n> > I was thinking an index over:\n> >\n> > (event, date_trunc('day', insert_time), log_id)\n> >\n> > And the query like\n> >\n> > SELECT min(log_id) FROM event_log\n> > WHERE event='S-Create' AND\n> > date_trunc('day',insert_time) = '2013-09-15'\n> >\n> >\n> > That's a regular simple range scan over the index.\n> \n> *) date_trunc has same problems as ::date: it is stable expression only\nfor\n> timestamptz. also, the index will be bigger since you're still indexing\n> timestamp\n> \n> *) row wise comparison search might be faster and is generalized to return\nN\n> records, not jut one.\n> \n> merlin\nI'm afraid it's not anything about composite index. (Because the query B\nworks fine and the plan is as expected)\nBTW, the timestamp is without timezone.\n\t\nI just thought of another way that demonstrate the issue.\nBoth query returns the same one row result. 
log_id is the bigint primary\nkey, event_data_search is still that indexed.\n---\nFast query\n---\nwith Q AS (\nselect event\nfrom event_log \nWHERE log_id>10000 and log_id<50000 \norder by event\n)\nSELECT * FROM Q LIMIT 1\n\nLimit (cost=2521.82..2521.83 rows=1 width=32) (actual time=88.342..88.342\nrows=1 loops=1)\n Output: q.event\n Buffers: shared hit=93 read=622\n CTE q\n -> Sort (cost=2502.07..2521.82 rows=39502 width=6) (actual\ntime=88.335..88.335 rows=1 loops=1)\n Output: event_log.event\n Sort Key: event_log.event\n Sort Method: quicksort Memory: 3486kB\n Buffers: shared hit=93 read=622\n -> Index Scan using event_log_pkey on uco.event_log\n(cost=0.00..1898.89 rows=39502 width=6) (actual time=13.918..65.573\nrows=39999 loops=1)\n Output: event_log.event\n Index Cond: ((event_log.log_id > 1010000) AND\n(event_log.log_id < 1050000))\n Buffers: shared hit=93 read=622\n -> CTE Scan on q (cost=0.00..237.01 rows=39502 width=32) (actual\ntime=88.340..88.340 rows=1 loops=1)\n Output: q.event\n Buffers: shared hit=93 read=622\nTotal runtime: 89.039 ms\n\n---\nSlow Query\n---\nResult (cost=1241.05..1241.05 rows=1 width=0) (actual\ntime=1099.532..1099.533 rows=1 loops=1)\n Output: $0\n Buffers: shared hit=49029 read=57866\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..1241.05 rows=1 width=6) (actual\ntime=1099.527..1099.527 rows=1 loops=1)\n Output: uco.event_log.event\n Buffers: shared hit=49029 read=57866\n -> Index Scan using event_data_search on uco.event_log\n(cost=0.00..49024009.79 rows=39502 width=6) (actual time=1099.523..1099.523\nrows=1 loops=1)\n Output: uco.event_log.event\n Index Cond: (uco.event_log.event IS NOT NULL)\n Filter: ((uco.event_log.log_id > 1010000) AND\n(uco.event_log.log_id < 1050000))\n Rows Removed by Filter: 303884\n Buffers: shared hit=49029 read=57866\nTotal runtime: 1099.568 ms\n(Note: Things got buffered so it goes down to 1 second, but comparing to the\nbuffer count with the query above this is a few orders slower than that)\n\n---\nThe CTE \"fast query\" works, it is completed with index scan over\n\"event_log_pkey\", which is what I am expecting, and it is good.\nBut it's much straight forward to write it as the \"slow query\", I am\nexpecting the planner would give the same optimum plan for both.\n\nFor the plan of \"Slow query\" has an estimated total cost of 1241.05, and the\n\"Fast query\" has 2521.83, \nhence the planner prefers that plan - the scanning over the\n\"event_data_search\" index plan - but this choice doesn't make sense to me.\n\nThis is my point I want to bring up, why the planner would do that? Instead\nof scanning over the \"event_log_pkey\"?\n\nLooking into the estimated cost of the Slow Query Plan, the Index Scan node\nlower bound estimation is 0.00. 
IIRC, it is saying the planner estimated\nthat the first result row could be returned at 0.00.\nbut actually it has to do a scan almost the whole index and table scan to\ncheck if the log_id are within the condition range, there is no way that the\nfirst row could ever be returned at 0.00.\n(The event is a text column with cardinality at around 20, and the order is\nappearing randomly all over the table - 0 correlation)\nHence this questionable estimation bubble up along the tree, the Limit node\nthought that it could finish within 1241.05 (once the first row is\nobtained), and the whole plan is thought to be finisable within 1241.05 -\nwhich is not the case.\n\nSam\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Sep 2013 00:33:49 +0800", "msg_from": "\"Sam Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Sam Wong\n> Sent: Thursday, September 26, 2013 0:34\n> To: 'postgres performance list'\n> Subject: Re: [PERFORM] Slow plan for MAX/MIN or LIMIT 1?\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf Of Merlin\n> > Moncure\n> > Sent: Wednesday, September 25, 2013 23:55\n> > To: Claudio Freire\n> > Cc: Sam Wong; postgres performance list\n> > Subject: Re: [PERFORM] Slow plan for MAX/MIN or LIMIT 1?\n> >\n> > On Wed, Sep 25, 2013 at 10:20 AM, Claudio Freire\n> > <[email protected]>\n> > wrote:\n> > > On Wed, Sep 25, 2013 at 10:29 AM, Merlin Moncure\n> > > <[email protected]>\n> > wrote:\n> > >> On Tue, Sep 24, 2013 at 4:56 PM, Claudio Freire\n> <[email protected]>\n> > wrote:\n> > >>> On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong <[email protected]>\n> wrote:\n> > >>>> This event_log table has 4 million rows.\n> > >>>>\n> > >>>> \"log_id\" is the primary key (bigint),\n> > >>>>\n> > >>>> there is a composite index \"event_data_search\" over (event::text,\n> > >>>> insert_time::datetime).\n> > >>>\n> > >>>\n> > >>> I think you need to add log_id to that composite index to get pg\n> > >>> to\n> use it.\n> > >>\n> > >> hurk: OP is two statistics misses (one of them massive that are\n> > >> combing to gobsmack you).\n> > >>\n> > >> your solution unfortuantely wont work: you can't combine two range\n> > >> searches in a single index scan. it would probably work if you it\n> > >> like this. If insert_time is a timestamp, not a timestamptz, we\n> > >> can convert it to date to get what I think he wants (as long as his\n> > >> queries are along date boundaries).\n> > >\n> > >\n> > > I was thinking an index over:\n> > >\n> > > (event, date_trunc('day', insert_time), log_id)\n> > >\n> > > And the query like\n> > >\n> > > SELECT min(log_id) FROM event_log\n> > > WHERE event='S-Create' AND\n> > > date_trunc('day',insert_time) = '2013-09-15'\n> > >\n> > >\n> > > That's a regular simple range scan over the index.\n> >\n> > *) date_trunc has same problems as ::date: it is stable expression\n> > only\n> for\n> > timestamptz. also, the index will be bigger since you're still\n> > indexing timestamp\n> >\n> > *) row wise comparison search might be faster and is generalized to\n> > return\n> N\n> > records, not jut one.\n> >\n> > merlin\n> I'm afraid it's not anything about composite index. 
(Because the query B\nworks\n> fine and the plan is as expected) BTW, the timestamp is without timezone.\n> \n> I just thought of another way that demonstrate the issue.\n> Both query returns the same one row result. log_id is the bigint primary\nkey,\n> event_data_search is still that indexed.\n> ---\n> Fast query\n> ---\nExcuse me, this is the actual query.\n\nwith Q AS (\nselect event\nfrom event_log \nWHERE log_id>1010000 and log_id<1050000 \norder by event\n)\nSELECT * FROM Q LIMIT 1\n> \n> Limit (cost=2521.82..2521.83 rows=1 width=32) (actual time=88.342..88.342\n> rows=1 loops=1)\n> Output: q.event\n> Buffers: shared hit=93 read=622\n> CTE q\n> -> Sort (cost=2502.07..2521.82 rows=39502 width=6) (actual\n> time=88.335..88.335 rows=1 loops=1)\n> Output: event_log.event\n> Sort Key: event_log.event\n> Sort Method: quicksort Memory: 3486kB\n> Buffers: shared hit=93 read=622\n> -> Index Scan using event_log_pkey on uco.event_log\n> (cost=0.00..1898.89 rows=39502 width=6) (actual time=13.918..65.573\n> rows=39999 loops=1)\n> Output: event_log.event\n> Index Cond: ((event_log.log_id > 1010000) AND\n> (event_log.log_id < 1050000))\n> Buffers: shared hit=93 read=622\n> -> CTE Scan on q (cost=0.00..237.01 rows=39502 width=32) (actual\n> time=88.340..88.340 rows=1 loops=1)\n> Output: q.event\n> Buffers: shared hit=93 read=622\n> Total runtime: 89.039 ms\n> \n> ---\n> Slow Query\n> ---\nAnd the query for this...I must have forgot to paste.\n\nselect min(event)\nfrom event_log \nWHERE log_id>1010000 and log_id<1050000\n> Result (cost=1241.05..1241.05 rows=1 width=0) (actual\n> time=1099.532..1099.533 rows=1 loops=1)\n> Output: $0\n> Buffers: shared hit=49029 read=57866\n> InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..1241.05 rows=1 width=6) (actual\n> time=1099.527..1099.527 rows=1 loops=1)\n> Output: uco.event_log.event\n> Buffers: shared hit=49029 read=57866\n> -> Index Scan using event_data_search on uco.event_log\n> (cost=0.00..49024009.79 rows=39502 width=6) (actual\n> time=1099.523..1099.523\n> rows=1 loops=1)\n> Output: uco.event_log.event\n> Index Cond: (uco.event_log.event IS NOT NULL)\n> Filter: ((uco.event_log.log_id > 1010000) AND\n> (uco.event_log.log_id < 1050000))\n> Rows Removed by Filter: 303884\n> Buffers: shared hit=49029 read=57866 Total runtime:\n> 1099.568 ms\n> (Note: Things got buffered so it goes down to 1 second, but comparing to\nthe\n> buffer count with the query above this is a few orders slower than that)\n> \n> ---\n> The CTE \"fast query\" works, it is completed with index scan over\n> \"event_log_pkey\", which is what I am expecting, and it is good.\n> But it's much straight forward to write it as the \"slow query\", I am\nexpecting\n> the planner would give the same optimum plan for both.\n> \n> For the plan of \"Slow query\" has an estimated total cost of 1241.05, and\nthe\n> \"Fast query\" has 2521.83, hence the planner prefers that plan - the\nscanning\n> over the \"event_data_search\" index plan - but this choice doesn't make\nsense\n> to me.\n> \n> This is my point I want to bring up, why the planner would do that?\nInstead of\n> scanning over the \"event_log_pkey\"?\n> \n> Looking into the estimated cost of the Slow Query Plan, the Index Scan\nnode\n> lower bound estimation is 0.00. 
IIRC, it is saying the planner estimated\nthat the\n> first result row could be returned at 0.00.\n> but actually it has to do a scan almost the whole index and table scan to\ncheck\n> if the log_id are within the condition range, there is no way that the\nfirst row\n> could ever be returned at 0.00.\n> (The event is a text column with cardinality at around 20, and the order\nis\n> appearing randomly all over the table - 0 correlation) Hence this\nquestionable\n> estimation bubble up along the tree, the Limit node thought that it could\nfinish\n> within 1241.05 (once the first row is obtained), and the whole plan is\nthought to\n> be finisable within 1241.05 - which is not the case.\n> \n> Sam\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Sep 2013 00:42:31 +0800", "msg_from": "\"Sam Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" }, { "msg_contents": "On Wed, Sep 25, 2013 at 12:54 PM, Merlin Moncure <[email protected]> wrote:\n>> I was thinking an index over:\n>>\n>> (event, date_trunc('day', insert_time), log_id)\n>>\n>> And the query like\n>>\n>> SELECT min(log_id) FROM event_log\n>> WHERE event='S-Create' AND\n>> date_trunc('day',insert_time) = '2013-09-15'\n>>\n>>\n>> That's a regular simple range scan over the index.\n>\n> *) date_trunc has same problems as ::date: it is stable expression\n> only for timestamptz. also, the index will be bigger since you're\n> still indexing timestamp\n\n\nAh, yes, good point.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 14:05:09 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" 
}, { "msg_contents": "Hi\n\n\nOn Wed, Sep 25, 2013 at 6:42 PM, Sam Wong <[email protected]> wrote:\n\n>\n> > ---\n> Excuse me, this is the actual query.\n>\n> with Q AS (\n> select event\n> from event_log\n> WHERE log_id>1010000 and log_id<1050000\n> order by event\n> )\n> SELECT * FROM Q LIMIT 1\n> >\n> > Limit (cost=2521.82..2521.83 rows=1 width=32) (actual\n> time=88.342..88.342\n> > rows=1 loops=1)\n> > Output: q.event\n> > Buffers: shared hit=93 read=622\n> > CTE q\n> > -> Sort (cost=2502.07..2521.82 rows=39502 width=6) (actual\n> > time=88.335..88.335 rows=1 loops=1)\n> > Output: event_log.event\n> > Sort Key: event_log.event\n> > Sort Method: quicksort Memory: 3486kB\n> > Buffers: shared hit=93 read=622\n> > -> Index Scan using event_log_pkey on uco.event_log\n> > (cost=0.00..1898.89 rows=39502 width=6) (actual time=13.918..65.573\n> > rows=39999 loops=1)\n> > Output: event_log.event\n> > Index Cond: ((event_log.log_id > 1010000) AND\n> > (event_log.log_id < 1050000))\n> > Buffers: shared hit=93 read=622\n> > -> CTE Scan on q (cost=0.00..237.01 rows=39502 width=32) (actual\n> > time=88.340..88.340 rows=1 loops=1)\n> > Output: q.event\n> > Buffers: shared hit=93 read=622\n> > Total runtime: 89.039 ms\n> >\n> > ---\n> > Slow Query\n> select min(event)\n> from event_log\n> WHERE log_id>1010000 and log_id<1050000\n> > Result (cost=1241.05..1241.05 rows=1 width=0) (actual\n> > time=1099.532..1099.533 rows=1 loops=1)\n> > Output: $0\n> > Buffers: shared hit=49029 read=57866\n> > InitPlan 1 (returns $0)\n> > -> Limit (cost=0.00..1241.05 rows=1 width=6) (actual\n> > time=1099.527..1099.527 rows=1 loops=1)\n> > Output: uco.event_log.event\n> > Buffers: shared hit=49029 read=57866\n> > -> Index Scan using event_data_search on uco.event_log\n> > (cost=0.00..49024009.79 rows=39502 width=6) (actual\n> > time=1099.523..1099.523\n> > rows=1 loops=1)\n> > Output: uco.event_log.event\n> > Index Cond: (uco.event_log.event IS NOT NULL)\n> > Filter: ((uco.event_log.log_id > 1010000) AND\n> > (uco.event_log.log_id < 1050000))\n> > Rows Removed by Filter: 303884\n> > Buffers: shared hit=49029 read=57866 Total runtime:\n> > 1099.568 ms\n> > (Note: Things got buffered so it goes down to 1 second, but comparing to\n> the\n> > buffer count with the query above this is a few orders slower than that)\n> >\n> > ---\n> > The CTE \"fast query\" works, it is completed with index scan over\n> > \"event_log_pkey\", which is what I am expecting, and it is good.\n> > But it's much straight forward to write it as the \"slow query\", I am\n> expecting\n> > the planner would give the same optimum plan for both.\n> >\n> > For the plan of \"Slow query\" has an estimated total cost of 1241.05, and\n> the\n> > \"Fast query\" has 2521.83, hence the planner prefers that plan - the\n> scanning\n> > over the \"event_data_search\" index plan - but this choice doesn't make\n> sense\n> > to me.\n> >\n> > This is my point I want to bring up, why the planner would do that?\n> Instead of\n> > scanning over the \"event_log_pkey\"?\n> >\n>\n Maybe there's a bug but I don't think so, Postgresql optimizer is strongly\nbias toward uncorrelated data but in your case log_id and event are highly\ncorrelated, right?\n\nWith your example it has to chose between:\n1- play safe and use event_log_pkey, scan 39502 rows and select the\nsmallest event.\n\n2- use event_data_search, 4 000 000 rows, 40 000 with a log_id in the\ninterval thus win big and find one in the first 100 scanned rows or lose\nbig and scan 4 000 000 rows.\n\nWith uncorrelated 
data 2- is 400 time faster than 1-, 100 rows vs 40 000.\n\nPostgresql is a high stake gambler :)\n\nDidier\n", "msg_date": "Mon, 30 Sep 2013 08:11:05 +0200", "msg_from": "didier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow plan for MAX/MIN or LIMIT 1?" } ]
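[Two workarounds came out of this thread. The first is the CTE form Merlin suggested, which in this PostgreSQL version acts as an optimization fence, so the cheap range scan is materialized before the minimum is taken. The second follows Merlin's and Claudio's indexing suggestions: equality-constrained leading columns with log_id last, so the minimum can be read straight off the index. This is only a sketch: the index name is made up, the ::date cast is immutable only because insert_time is a timestamp without time zone, and the per-day predicate is not exactly the same as the original open interval.]

-- Workaround 1: CTE as an optimization fence
WITH candidates AS (
    SELECT log_id
    FROM event_log
    WHERE event = 'S-Create'
      AND insert_time > '2013-09-15' AND insert_time < '2013-09-16'
)
SELECT min(log_id) FROM candidates;

-- Workaround 2: equality-friendly composite index (hypothetical name),
-- so the minimum log_id is read directly from the index
CREATE INDEX event_log_event_day_log_id
    ON event_log (event, (insert_time::date), log_id);

SELECT log_id
FROM event_log
WHERE event = 'S-Create'
  AND insert_time::date = DATE '2013-09-15'
ORDER BY event, insert_time::date, log_id
LIMIT 1;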
[ { "msg_contents": "Hi all,\n\nBringing up new slaves is a task we are performing very frequently. Our\ncurrent process is:\n1. launch a new instance, perform a pg_basebackup\n2. start postgresql on the slave and let the slave recover from the archive\nall the xlog files it missed and that got created during step 1.\n3. let the slave stream from the master\n\nWe use pg_basebackup with the -X option to stream the wal files created and\nget rid of step 2.\n\nHowever, when the postgresql server starts on the slave, it first tries to\nrecover the files from our archive even if the files have been copied to\nthe pg_xlog directory. Recovering from the archive is slow and seem to be\nunnecessary given that the xlogs have been copied.\n\nTo address that problem we wrote a little shell script that changes that\norder. The script let the slave make a first attempt at using the local\nxlog file if it exists, and falls back to recovering from the archive in\ncase of failure.\n\nWhile this seems to be a common use case, we could not find much info about\nthis.\n- Are we missing something?\n- What are the trade-offs of such approach?\n- Is there any additional risks of bringing up new slave with that recovery\noption?\n\nThanks a lot!\n\nFrancois\n\nHi all,Bringing up new slaves is a task we are performing very frequently.  Our current process is: 1. launch a new instance, perform a pg_basebackup2. start postgresql on the slave and let the slave recover from the archive all the xlog files it missed and that got created during step 1.\n\n3. let the slave stream from the masterWe use pg_basebackup with the -X option to stream the wal files created and get rid of step 2.  However, when the postgresql server starts on the slave, it first tries to recover the files from our archive even if the files have been copied to the pg_xlog directory.   Recovering from the archive is slow and seem to be unnecessary given that the xlogs have been copied.  \nTo address that problem we wrote a little shell script that changes that order.  The script let the slave make a first attempt at using the local xlog file if it exists, and falls back to recovering from the archive in case of failure.\nWhile this seems to be a common use case, we could not find much info about this.  - Are we missing something? - What are the trade-offs of such approach?  - Is there any additional risks of bringing up new slave with that recovery option?\nThanks a lot!Francois", "msg_date": "Tue, 24 Sep 2013 11:52:58 -0700", "msg_from": "=?ISO-8859-1?Q?Fran=E7ois_Deli=E8ge?= <[email protected]>", "msg_from_op": true, "msg_subject": "Bringing up new slaves faster" } ]
[ { "msg_contents": "Hi,\n\nI have a table with zip_code and latitude and longitude.\n\n\\d zip_code_based_lng_lat\n Table \"public.zip_code_based_lng_lat\"\n Column | Type | Modifiers\n--------+------------------------+-----------\n zip | character varying(100) |\n state | character varying(100) |\n city | character varying(100) |\n type | character varying(100) |\n lat | character varying(100) |\n lng | character varying(100) |\nIndexes:\n \"zip_code_based_lng_lat_zipidx\" btree (zip)\n\nI need to find the closest distance using the radius formula using a\nzip_code provided by user.\n\nI build the query like:\n\nselect *,\nearth_distance(q2_c1, q1.c1) as d\nfrom\n(\nselect *, ll_to_earth(lat::float,lng::float) as c1 from\nzip_code_based_lng_lat\n) as q1,\n(\nselect ll_to_earth(lat::float,lng::float) q2_c1 from zip_code_based_lng_lat\nwhere zip='18938'\n) as q2\norder by d\nlimit 10\n\n\n Limit (cost=216010.21..216010.24 rows=10 width=55) (actual\ntime=38296.185..38296.191 rows=10 loops=1)\n -> Sort (cost=216010.21..216415.74 rows=162212 width=55) (actual\ntime=38296.182..38296.182 rows=10 loops=1)\n Sort Key:\n(sec_to_gc(cube_distance((ll_to_earth((public.zip_code_based_lng_lat.lat)::double\nprecision, (public.zip_code_based_lng_lat.lng)::double precision))::cube,\n(ll\n_to_earth((public.zip_code_based_lng_lat.lat)::double precision,\n(public.zip_code_based_lng_lat.lng)::double precision))::cube)))\n Sort Method: top-N heapsort Memory: 27kB\n -> Nested Loop (cost=0.00..212504.87 rows=162212 width=55)\n(actual time=3.244..38052.444 rows=81106 loops=1)\n -> Seq Scan on zip_code_based_lng_lat (cost=0.00..817.90\nrows=81106 width=38) (actual time=0.025..50.669 rows=81106 loops=1)\n -> Materialize (cost=0.00..0.32 rows=2 width=17) (actual\ntime=0.000..0.001 rows=1 loops=81106)\n -> Index Scan using zip_code_based_lng_lat_zipidx on\nzip_code_based_lng_lat (cost=0.00..0.31 rows=2 width=17) (actual\ntime=0.080..0.084 rows=1 loops=1)\n Index Cond: ((zip)::text = '18938'::text)\n Total runtime: 38296.360 ms\n\n\nThe result is fine. But it is too slow.\nI am using Postgresql 9.2 with following parameters:\n\nshared_buffers = 6GB\nwork_mem = 500 MB\nseq_page_cost = 0.01\nrandom_page_cost = 0.01\n\nAny idea to improve it.\n\nThanks.\n\nHi,I have a table with zip_code and latitude and longitude. 
\\d zip_code_based_lng_lat    Table \"public.zip_code_based_lng_lat\"\n Column |          Type          | Modifiers --------+------------------------+----------- zip    | character varying(100) |  state  | character varying(100) |  city   | character varying(100) | \n type   | character varying(100) |  lat    | character varying(100) |  lng    | character varying(100) | Indexes:    \"zip_code_based_lng_lat_zipidx\" btree (zip)\nI need to find the closest distance using the radius formula using a zip_code provided by user.I build the query like:select *,\nearth_distance(q2_c1, q1.c1) as dfrom(select *, ll_to_earth(lat::float,lng::float)  as c1  from zip_code_based_lng_lat) as q1,(select ll_to_earth(lat::float,lng::float) q2_c1 from zip_code_based_lng_lat where zip='18938'\n) as q2order by dlimit 10 Limit  (cost=216010.21..216010.24 rows=10 width=55) (actual time=38296.185..38296.191 rows=10 loops=1)   ->  Sort  (cost=216010.21..216415.74 rows=162212 width=55) (actual time=38296.182..38296.182 rows=10 loops=1)\n         Sort Key: (sec_to_gc(cube_distance((ll_to_earth((public.zip_code_based_lng_lat.lat)::double precision, (public.zip_code_based_lng_lat.lng)::double precision))::cube, (ll_to_earth((public.zip_code_based_lng_lat.lat)::double precision, (public.zip_code_based_lng_lat.lng)::double precision))::cube)))\n         Sort Method: top-N heapsort  Memory: 27kB         ->  Nested Loop  (cost=0.00..212504.87 rows=162212 width=55) (actual time=3.244..38052.444 rows=81106 loops=1)               ->  Seq Scan on zip_code_based_lng_lat  (cost=0.00..817.90 rows=81106 width=38) (actual time=0.025..50.669 rows=81106 loops=1)\n               ->  Materialize  (cost=0.00..0.32 rows=2 width=17) (actual time=0.000..0.001 rows=1 loops=81106)                     ->  Index Scan using zip_code_based_lng_lat_zipidx on zip_code_based_lng_lat  (cost=0.00..0.31 rows=2 width=17) (actual time=0.080..0.084 rows=1 loops=1)\n                           Index Cond: ((zip)::text = '18938'::text) Total runtime: 38296.360 msThe result is fine. But it is too slow. 
I am using Postgresql 9.2 with following parameters:\nshared_buffers = 6GBwork_mem = 500 MBseq_page_cost = 0.01random_page_cost = 0.01Any idea to improve it.Thanks.", "msg_date": "Wed, 25 Sep 2013 11:05:08 -0400", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "earthdistance query performance" }, { "msg_contents": "On Wed, Sep 25, 2013 at 10:05 AM, AI Rumman <[email protected]> wrote:\n> Hi,\n>\n> I have a table with zip_code and latitude and longitude.\n>\n> \\d zip_code_based_lng_lat\n> Table \"public.zip_code_based_lng_lat\"\n> Column | Type | Modifiers\n> --------+------------------------+-----------\n> zip | character varying(100) |\n> state | character varying(100) |\n> city | character varying(100) |\n> type | character varying(100) |\n> lat | character varying(100) |\n> lng | character varying(100) |\n> Indexes:\n> \"zip_code_based_lng_lat_zipidx\" btree (zip)\n>\n> I need to find the closest distance using the radius formula using a\n> zip_code provided by user.\n>\n> I build the query like:\n>\n> select *,\n> earth_distance(q2_c1, q1.c1) as d\n> from\n> (\n> select *, ll_to_earth(lat::float,lng::float) as c1 from\n> zip_code_based_lng_lat\n> ) as q1,\n> (\n> select ll_to_earth(lat::float,lng::float) q2_c1 from zip_code_based_lng_lat\n> where zip='18938'\n> ) as q2\n> order by d\n> limit 10\n>\n>\n> Limit (cost=216010.21..216010.24 rows=10 width=55) (actual\n> time=38296.185..38296.191 rows=10 loops=1)\n> -> Sort (cost=216010.21..216415.74 rows=162212 width=55) (actual\n> time=38296.182..38296.182 rows=10 loops=1)\n> Sort Key:\n> (sec_to_gc(cube_distance((ll_to_earth((public.zip_code_based_lng_lat.lat)::double\n> precision, (public.zip_code_based_lng_lat.lng)::double precision))::cube,\n> (ll\n> _to_earth((public.zip_code_based_lng_lat.lat)::double precision,\n> (public.zip_code_based_lng_lat.lng)::double precision))::cube)))\n> Sort Method: top-N heapsort Memory: 27kB\n> -> Nested Loop (cost=0.00..212504.87 rows=162212 width=55)\n> (actual time=3.244..38052.444 rows=81106 loops=1)\n> -> Seq Scan on zip_code_based_lng_lat (cost=0.00..817.90\n> rows=81106 width=38) (actual time=0.025..50.669 rows=81106 loops=1)\n> -> Materialize (cost=0.00..0.32 rows=2 width=17) (actual\n> time=0.000..0.001 rows=1 loops=81106)\n> -> Index Scan using zip_code_based_lng_lat_zipidx on\n> zip_code_based_lng_lat (cost=0.00..0.31 rows=2 width=17) (actual\n> time=0.080..0.084 rows=1 loops=1)\n> Index Cond: ((zip)::text = '18938'::text)\n> Total runtime: 38296.360 ms\n\nyour problem is the sort is happening before the limit. you need to\nreconfigure your query so that's compatible with nearest neighbor\nsearch (which was introduced with 9.1).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 11:25:26 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: earthdistance query performance" } ]
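[A sketch of the rewrite Merlin is pointing at: make the ORDER BY expression match a KNN-capable GiST index so the LIMIT 10 can stop after a handful of index probes instead of computing and sorting 81k distances. It uses the built-in point <-> point operator (KNN-indexable with GiST since 9.1), and assumes the origin coordinates for zip 18938 are looked up first and passed in as literals; the index name and the coordinates below are placeholders. Note that <-> on points is planar distance, so it only orders candidates approximately, which is why earth_distance() is still used to report the true great-circle distance for the rows returned. If the varchar-to-float8 cast is rejected as not immutable in the index expression, storing lat/lng as float8 columns (advisable anyway) or wrapping the cast in a small IMMUTABLE function works around it.]

-- hypothetical index; point(x, y) takes longitude first, then latitude
CREATE INDEX zip_lng_lat_gist
    ON zip_code_based_lng_lat
    USING gist (point(lng::float8, lat::float8));

-- origin coordinates below are approximate placeholders for zip 18938
SELECT z.*,
       earth_distance(ll_to_earth(z.lat::float8, z.lng::float8),
                      ll_to_earth(40.30, -75.07)) AS meters
FROM zip_code_based_lng_lat z
ORDER BY point(z.lng::float8, z.lat::float8) <-> point(-75.07, 40.30)
LIMIT 10;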
[ { "msg_contents": "I spent about a week optimizing a query in our performance-testing environment, which has hardware similar to production.\n\nI was able to refactor the query and reduce the runtime from hours to about 40 seconds, through the use of CTEs and a couple of new indexes.\n\nThe database was rebuilt and refreshed with the very similar data from production, but now the query takes hours again.\n\nIn the query plan, it is clear that the row count estimates are WAY too low, even though the statistics are up to date. Here's a sample query plan:\n\nCTE Scan on stef (cost=164.98..165.00 rows=1 width=38)\n CTE terms\n -> Nested Loop (cost=0.00..62.40 rows=1 width=12)\n -> Index Scan using term_idx1 on term t (cost=0.00..52.35 rows=1 width=12)\n Index Cond: (partner_id = 497)\n Filter: (recalculate_district_averages_yn AND (NOT is_deleted_yn))\n -> Index Scan using growth_measurement_window_fk1 on growth_measurement_window gw (cost=0.00..10.04 rows=1 width=4)\n Index Cond: (term_id = t.term_id)\n Filter: (test_window_complete_yn AND (NOT is_deleted_yn) AND ((growth_window_type)::text = 'DISTRICT'::text))\n CTE stef\n -> Nested Loop (cost=0.00..102.58 rows=1 width=29)\n Join Filter: ((ssef.student_id = terf.student_id) AND (ssef.grade_id = terf.grade_id))\n -> Nested Loop (cost=0.00..18.80 rows=3 width=28)\n -> CTE Scan on terms t (cost=0.00..0.02 rows=1 width=8)\n -> Index Scan using student_school_enrollment_fact_idx2 on student_school_enrollment_fact ssef (cost=0.00..18.74 rows=3 width=20)\n Index Cond: ((partner_id = t.partner_id) AND (term_id = t.term_id))\n Filter: primary_yn\n -> Index Scan using test_event_result_fact_idx3 on test_event_result_fact terf (cost=0.00..27.85 rows=4 width=25)\n Index Cond: ((partner_id = t.partner_id) AND (term_id = t.term_id))\n Filter: growth_event_yn\n\nThe estimates in the first CTE are correct, but in the second, the scan on student_school_enrollment_fact will return about 1.5 million rows, and the scan on test_event_result_fact actually returns about 1.1 million. The top level join should return about 900K rows. I believe the fundamental issue is that the CTE stef outer nested loop should be a merge join instead, but I cannot figure out why the optimizer is estimating one row when it has the statistics to correctly estimate the count.\n\nWhat would cause PG to so badly estimate the row counts?\n\nI've already regenerated the indexes and re-analyzed the tables involved.\n\nWhat else can I do to find out why it's running so slowly?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 Sep 2013 15:58:14 +0000", "msg_from": "Jim Garrison <[email protected]>", "msg_from_op": true, "msg_subject": "Troubleshooting query performance issues" }, { "msg_contents": "On Wed, Sep 25, 2013 at 8:58 AM, Jim Garrison <[email protected]> wrote:\n\n> I spent about a week optimizing a query in our performance-testing\n> environment, which has hardware similar to production.\n>\n> I was able to refactor the query and reduce the runtime from hours to\n> about 40 seconds, through the use of CTEs and a couple of new indexes.\n>\n> The database was rebuilt and refreshed with the very similar data from\n> production, but now the query takes hours again.\n>\n> In the query plan, it is clear that the row count estimates are WAY too\n> low, even though the statistics are up to date. 
Here's a sample query plan:\n>\n> CTE Scan on stef (cost=164.98..165.00 rows=1 width=38)\n> CTE terms\n> -> Nested Loop (cost=0.00..62.40 rows=1 width=12)\n> -> Index Scan using term_idx1 on term t (cost=0.00..52.35\n> rows=1 width=12)\n> Index Cond: (partner_id = 497)\n> Filter: (recalculate_district_averages_yn AND (NOT\n> is_deleted_yn))\n> -> Index Scan using growth_measurement_window_fk1 on\n> growth_measurement_window gw (cost=0.00..10.04 rows=1 width=4)\n> Index Cond: (term_id = t.term_id)\n> Filter: (test_window_complete_yn AND (NOT is_deleted_yn)\n> AND ((growth_window_type)::text = 'DISTRICT'::text))\n> CTE stef\n> -> Nested Loop (cost=0.00..102.58 rows=1 width=29)\n> Join Filter: ((ssef.student_id = terf.student_id) AND\n> (ssef.grade_id = terf.grade_id))\n> -> Nested Loop (cost=0.00..18.80 rows=3 width=28)\n> -> CTE Scan on terms t (cost=0.00..0.02 rows=1 width=8)\n> -> Index Scan using student_school_enrollment_fact_idx2\n> on student_school_enrollment_fact ssef (cost=0.00..18.74 rows=3 width=20)\n> Index Cond: ((partner_id = t.partner_id) AND\n> (term_id = t.term_id))\n> Filter: primary_yn\n> -> Index Scan using test_event_result_fact_idx3 on\n> test_event_result_fact terf (cost=0.00..27.85 rows=4 width=25)\n> Index Cond: ((partner_id = t.partner_id) AND (term_id =\n> t.term_id))\n> Filter: growth_event_yn\n>\n> The estimates in the first CTE are correct, but in the second, the scan on\n> student_school_enrollment_fact will return about 1.5 million rows, and the\n> scan on test_event_result_fact actually returns about 1.1 million. The top\n> level join should return about 900K rows. I believe the fundamental issue\n> is that the CTE stef outer nested loop should be a merge join instead, but\n> I cannot figure out why the optimizer is estimating one row when it has the\n> statistics to correctly estimate the count.\n>\n> What would cause PG to so badly estimate the row counts?\n>\n> I've already regenerated the indexes and re-analyzed the tables involved.\n>\n> What else can I do to find out why it's running so slowly?\n>\n>\nMore details about the environment would probably be helpful:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\nAre you able to swap out the CTE for a temp table and index that (+analyze)\nto compare against the CTE version?\n\nOn Wed, Sep 25, 2013 at 8:58 AM, Jim Garrison <[email protected]> wrote:\nI spent about a week optimizing a query in our performance-testing environment, which has hardware similar to production.\n\nI was able to refactor the query and reduce the runtime from hours to about 40 seconds, through the use of CTEs and a couple of new indexes.\n\nThe database was rebuilt and refreshed with the very similar data from production, but now the query takes hours again.\n\nIn the query plan, it is clear that the row count estimates are WAY too low, even though the statistics are up to date.  
Here's a sample query plan:\n\nCTE Scan on stef  (cost=164.98..165.00 rows=1 width=38)\n  CTE terms\n    ->  Nested Loop  (cost=0.00..62.40 rows=1 width=12)\n          ->  Index Scan using term_idx1 on term t  (cost=0.00..52.35 rows=1 width=12)\n                Index Cond: (partner_id = 497)\n                Filter: (recalculate_district_averages_yn AND (NOT is_deleted_yn))\n          ->  Index Scan using growth_measurement_window_fk1 on growth_measurement_window gw  (cost=0.00..10.04 rows=1 width=4)\n                Index Cond: (term_id = t.term_id)\n                Filter: (test_window_complete_yn AND (NOT is_deleted_yn) AND ((growth_window_type)::text = 'DISTRICT'::text))\n  CTE stef\n    ->  Nested Loop  (cost=0.00..102.58 rows=1 width=29)\n          Join Filter: ((ssef.student_id = terf.student_id) AND (ssef.grade_id = terf.grade_id))\n          ->  Nested Loop  (cost=0.00..18.80 rows=3 width=28)\n                ->  CTE Scan on terms t  (cost=0.00..0.02 rows=1 width=8)\n                ->  Index Scan using student_school_enrollment_fact_idx2 on student_school_enrollment_fact ssef  (cost=0.00..18.74 rows=3 width=20)\n                      Index Cond: ((partner_id = t.partner_id) AND (term_id = t.term_id))\n                      Filter: primary_yn\n          ->  Index Scan using test_event_result_fact_idx3 on test_event_result_fact terf  (cost=0.00..27.85 rows=4 width=25)\n                Index Cond: ((partner_id = t.partner_id) AND (term_id = t.term_id))\n                Filter: growth_event_yn\n\nThe estimates in the first CTE are correct, but in the second, the scan on student_school_enrollment_fact will return about 1.5 million rows, and the scan on test_event_result_fact actually returns about 1.1 million.  The top level join should return about 900K rows.  I believe the fundamental issue is that the CTE stef outer nested loop should be a merge join instead, but I cannot figure out why the optimizer is estimating one row when it has the statistics to correctly estimate the count.\n\nWhat would cause PG to so badly estimate the row counts?\n\nI've already regenerated the indexes and re-analyzed the tables involved.\n\nWhat else can I do to find out why it's running so slowly?\nMore details about the environment would probably be helpful: https://wiki.postgresql.org/wiki/Slow_Query_Questions\nAre you able to swap out the CTE for a temp table and index that (+analyze) to compare against the CTE version?", "msg_date": "Wed, 25 Sep 2013 09:30:30 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Troubleshooting query performance issues" } ]
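[A sketch of the temp-table experiment bricklen suggests. The column list and join conditions below are reconstructed loosely from the plan above, not taken from the original SQL; the point is only the mechanics: unlike a CTE, a temp table can be indexed and ANALYZEd, so the outer join is planned from real row counts instead of the 1-row guesses visible in the plan.]

-- illustrative reconstruction of the "stef" CTE body (not the original SQL)
CREATE TEMP TABLE stef_tmp AS
SELECT ssef.student_id, ssef.grade_id, ssef.partner_id, ssef.term_id
FROM term t
JOIN growth_measurement_window gw ON gw.term_id = t.term_id
JOIN student_school_enrollment_fact ssef
  ON ssef.partner_id = t.partner_id AND ssef.term_id = t.term_id
JOIN test_event_result_fact terf
  ON terf.partner_id = t.partner_id AND terf.term_id = t.term_id
 AND terf.student_id = ssef.student_id AND terf.grade_id = ssef.grade_id
WHERE t.partner_id = 497
  AND t.recalculate_district_averages_yn AND NOT t.is_deleted_yn
  AND gw.test_window_complete_yn AND NOT gw.is_deleted_yn
  AND gw.growth_window_type = 'DISTRICT'
  AND ssef.primary_yn
  AND terf.growth_event_yn;

CREATE INDEX stef_tmp_idx ON stef_tmp (student_id, grade_id);
ANALYZE stef_tmp;   -- temp tables are never analyzed automatically

-- then run the rest of the query against stef_tmp instead of the CTE and
-- compare EXPLAIN (ANALYZE, BUFFERS) output with the CTE version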
[ { "msg_contents": "We have traced this to the *addition* of a two-column index. \n\nThe two tables in question both have single-column indexes on two foreign keys, say columns A and B. The query joins the two large tables on A and B. \n\nWith only the two indexes, the query plan does a bitmap AND on the index scan results and performance is stable.\n\nI added an index on (A,B), and this caused the planner to use the new index, but I was never able to get the query to complete. In one instance I let it run 18 hours. \n\nThe onlly difference was the addition of the index\n\nSummary:\n\n- With index on (A,B) -- query time is \"infinite\" \n\n- Without index on (A,B), relying on individual indexes and bitmap AND -- query time is about 4 minutes (as expected given the data volume)\n\nDoes this sound like a bug in the query planner?\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Jim Garrison\n> Sent: Wednesday, September 25, 2013 8:58 AM\n> To: [email protected]\n> Subject: [PERFORM] Troubleshooting query performance issues\n> \n> I spent about a week optimizing a query in our performance-testing\n> environment, which has hardware similar to production.\n> \n> I was able to refactor the query and reduce the runtime from hours to about\n> 40 seconds, through the use of CTEs and a couple of new indexes.\n> \n> The database was rebuilt and refreshed with the very similar data from\n> production, but now the query takes hours again.\n> \n> In the query plan, it is clear that the row count estimates are WAY too low,\n> even though the statistics are up to date. Here's a sample query plan:\n> \n> CTE Scan on stef (cost=164.98..165.00 rows=1 width=38)\n> CTE terms\n> -> Nested Loop (cost=0.00..62.40 rows=1 width=12)\n> -> Index Scan using term_idx1 on term t (cost=0.00..52.35 rows=1\n> width=12)\n> Index Cond: (partner_id = 497)\n> Filter: (recalculate_district_averages_yn AND (NOT is_deleted_yn))\n> -> Index Scan using growth_measurement_window_fk1 on\n> growth_measurement_window gw (cost=0.00..10.04 rows=1 width=4)\n> Index Cond: (term_id = t.term_id)\n> Filter: (test_window_complete_yn AND (NOT is_deleted_yn) AND\n> ((growth_window_type)::text = 'DISTRICT'::text))\n> CTE stef\n> -> Nested Loop (cost=0.00..102.58 rows=1 width=29)\n> Join Filter: ((ssef.student_id = terf.student_id) AND (ssef.grade_id =\n> terf.grade_id))\n> -> Nested Loop (cost=0.00..18.80 rows=3 width=28)\n> -> CTE Scan on terms t (cost=0.00..0.02 rows=1 width=8)\n> -> Index Scan using student_school_enrollment_fact_idx2 on\n> student_school_enrollment_fact ssef (cost=0.00..18.74 rows=3 width=20)\n> Index Cond: ((partner_id = t.partner_id) AND (term_id =\n> t.term_id))\n> Filter: primary_yn\n> -> Index Scan using test_event_result_fact_idx3 on\n> test_event_result_fact terf (cost=0.00..27.85 rows=4 width=25)\n> Index Cond: ((partner_id = t.partner_id) AND (term_id = t.term_id))\n> Filter: growth_event_yn\n> \n> The estimates in the first CTE are correct, but in the second, the scan on\n> student_school_enrollment_fact will return about 1.5 million rows, and the\n> scan on test_event_result_fact actually returns about 1.1 million. The top\n> level join should return about 900K rows. 
I believe the fundamental issue is\n> that the CTE stef outer nested loop should be a merge join instead, but I\n> cannot figure out why the optimizer is estimating one row when it has the\n> statistics to correctly estimate the count.\n> \n> What would cause PG to so badly estimate the row counts?\n> \n> I've already regenerated the indexes and re-analyzed the tables involved.\n> \n> What else can I do to find out why it's running so slowly?\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 27 Sep 2013 16:04:25 +0000", "msg_from": "Jim Garrison <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Troubleshooting query performance issues - Resolved (sort of)" } ]
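Since DDL is transactional in PostgreSQL, the effect of the new composite index can be compared without permanently losing it. The index name and query below are placeholders; note that DROP INDEX holds an ACCESS EXCLUSIVE lock on the table until the transaction ends, so this sketch is only suitable on a test system or during a quiet window:

    BEGIN;
    DROP INDEX some_ab_composite_idx;   -- placeholder name for the new (A,B) index
    EXPLAIN
    SELECT 1;                           -- placeholder: put the real join here; plain
                                        -- EXPLAIN shows whether the plan falls back
                                        -- to the bitmap AND without running the query
    ROLLBACK;                           -- the index is untouched after rollback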
[ { "msg_contents": "We have traced this to the *addition* of a two-column index. \n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Jim Garrison\n> Sent: Wednesday, September 25, 2013 8:58 AM\n> To: [email protected]\n> Subject: [PERFORM] Troubleshooting query performance issues\n> \n> I spent about a week optimizing a query in our performance-testing\n> environment, which has hardware similar to production.\n> \n> I was able to refactor the query and reduce the runtime from hours to about\n> 40 seconds, through the use of CTEs and a couple of new indexes.\n> \n> The database was rebuilt and refreshed with the very similar data from\n> production, but now the query takes hours again.\n> \n> In the query plan, it is clear that the row count estimates are WAY too low,\n> even though the statistics are up to date. Here's a sample query plan:\n> \n[snip]\n\nThe two tables in question both have single-column indexes on two foreign keys, say columns A and B. The query joins the two large tables on A and B. \n\nWith only the two indexes, the query plan does a bitmap AND on the index scan results and performance is stable.\n\nI added an index on (A,B), and this caused the planner to use the new index, but I was never able to get the query to complete. In one instance I let it run 18 hours. \n\nThe onlly difference was the addition of the index\n\nSummary:\n\n- With index on (A,B) -- query time is \"infinite\" \n\n- Without index on (A,B), relying on individual indexes and bitmap AND -- query time is about 4 minutes (as expected given the data volume)\n\nDoes this sound like a bug in the query planner?\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 27 Sep 2013 19:57:18 +0000", "msg_from": "Jim Garrison <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Troubleshooting query performance issues - resolved (sort of)" } ]
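If the composite index is only exposing an underlying misestimate, a cheap experiment before treating this as a planner bug is to raise the statistics targets on the join columns and re-analyze. The table and column names below are taken from the plan earlier in the thread; on 9.x the planner has no cross-column statistics, so correlation between partner_id and term_id may still defeat the estimate:

    ALTER TABLE test_event_result_fact          ALTER COLUMN partner_id SET STATISTICS 1000;
    ALTER TABLE test_event_result_fact          ALTER COLUMN term_id    SET STATISTICS 1000;
    ALTER TABLE student_school_enrollment_fact  ALTER COLUMN partner_id SET STATISTICS 1000;
    ALTER TABLE student_school_enrollment_fact  ALTER COLUMN term_id    SET STATISTICS 1000;
    ANALYZE test_event_result_fact;
    ANALYZE student_school_enrollment_fact;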
[ { "msg_contents": "I just sent off to this list for query help, and found the process of\ngathering all the requested info somewhat tedious. So I created a little\nBASH script to try to pull together as much of this information as\npossible.\n\nThe script reads an analyze file, and generates SQL queries to retrieve the\nfollowing information:\n\n\n - Postgres Version\n - Changes to postgresql.conf\n - Description of all tables scanned\n - Description of all Indices\n - Actual and estimated row counts for all the tables\n\n\nHopefully this may be of use to others as well. This list doesn't include\ndefinitions for any views or custom functions--I don't think they're in the\nanalyze output, but if they are please let me know.\n\nAny comments or suggestions for improvement would be most welcome. Thanks.\n\nKen\n\np.s., This script runs fine on my computer (Ubuntu 13.04), but on a Fedora\n11 machine it dies with\n\npg_analyze_info.sh: line 18: unexpected EOF while looking for matching `)'\npg_analyze_info.sh: line 57: syntax error: unexpected end of file\n\nIf anyone knows why, or encounters a similar error and fixes it, please let\nme know!\n\n\n-- \nAGENCY Software\nA data system that puts you in control\n100% Free Software\n*http://agency-software.org/*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing\nlist<[email protected]?body=subscribe>\n to\nlearn more about AGENCY or\nfollow the discussion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 29 Sep 2013 02:09:07 -0700", "msg_from": "Ken Tanzer <[email protected]>", "msg_from_op": true, "msg_subject": "BASH script for collecting analyze-related info" }, { "msg_contents": "On Sun, Sep 29, 2013 at 2:09 AM, Ken Tanzer <[email protected]> wrote:\n\n> I just sent off to this list for query help, and found the process of\n> gathering all the requested info somewhat tedious. So I created a little\n> BASH script to try to pull together as much of this information as\n> possible.\n>\n> The script reads an analyze file, and generates SQL queries to retrieve\n> the following information:\n>\n>\n> - Postgres Version\n> - Changes to postgresql.conf\n> - Description of all tables scanned\n> - Description of all Indices\n> - Actual and estimated row counts for all the tables\n>\n>\n> Hopefully this may be of use to others as well. This list doesn't include\n> definitions for any views or custom functions--I don't think they're in the\n> analyze output, but if they are please let me know.\n>\n> Any comments or suggestions for improvement would be most welcome. Thanks.\n>\n> Ken\n>\n> p.s., This script runs fine on my computer (Ubuntu 13.04), but on a\n> Fedora 11 machine it dies with\n>\n> pg_analyze_info.sh: line 18: unexpected EOF while looking for matching `)'\n> pg_analyze_info.sh: line 57: syntax error: unexpected end of file\n>\n> If anyone knows why, or encounters a similar error and fixes it, please\n> let me know!\n>\n\nIt's the blank line on line 26. 
Put a backslash on that line or delete it\nentirely.\n\nCraig\n\n\n>\n>\n> --\n> AGENCY Software\n> A data system that puts you in control\n> 100% Free Software\n> *http://agency-software.org/*\n> [email protected]\n> (253) 245-3801\n>\n> Subscribe to the mailing list<[email protected]?body=subscribe>\n> to\n> learn more about AGENCY or\n> follow the discussion.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\nOn Sun, Sep 29, 2013 at 2:09 AM, Ken Tanzer <[email protected]> wrote:\nI just sent off to this list for query help, and found the process of gathering all the requested info somewhat tedious.  So I created a little BASH script to try to pull together as much of this information as possible.  \nThe script reads an analyze file, and generates SQL queries to retrieve the following information:Postgres VersionChanges to postgresql.confDescription of all tables scanned\nDescription of all IndicesActual and estimated row counts for all the tablesHopefully this may be of use to others as well.  This list doesn't include definitions for any views or custom functions--I don't think they're in the analyze output, but if they are please let me know.\nAny comments or suggestions for improvement would be most welcome.  Thanks.Kenp.s.,  This script runs fine on my computer (Ubuntu 13.04), but on a Fedora 11 machine it dies with \npg_analyze_info.sh: line 18: unexpected EOF while looking for matching `)'pg_analyze_info.sh: line 57: syntax error: unexpected end of file\nIf anyone knows why, or encounters a similar error and fixes it, please let me know!It's the blank line on line 26.  Put a backslash on that line or delete it entirely.\nCraig \n-- \nAGENCY Software  A data system that puts you in control100% Free Softwarehttp://agency-software.org/\[email protected](253) 245-3801Subscribe to the mailing list to\nlearn more about AGENCY orfollow the discussion.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 29 Sep 2013 09:57:15 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BASH script for collecting analyze-related info" }, { "msg_contents": ">\n> p.s., This script runs fine on my computer (Ubuntu 13.04), but on a\n>> Fedora 11 machine it dies with\n>>\n>> pg_analyze_info.sh: line 18: unexpected EOF while looking for matching `)'\n>> pg_analyze_info.sh: line 57: syntax error: unexpected end of file\n>>\n>> If anyone knows why, or encounters a similar error and fixes it, please\n>> let me know!\n>>\n>\n> It's the blank line on line 26. 
Put a backslash on that line or delete it\n> entirely.\n>\n> Craig\n>\n\n\nWell that made perfect sense, but after making that change I'm still\ngetting the same error.\n\n# Get tables\nTABLES=$( \\\ncat <( \\\n# Indexed tables \\\negrep -o 'Index Scan using \\b[a-zA-Z0-9_-]* on [a-zA-Z0-9_-]*' $FILE | cut\n-f 6 -d ' ' \\\n) <( \\\n# Scanned Tables \\\negrep -o 'Seq Scan on \\b[a-zA-Z0-9_-]* ' $FILE | cut -f 4 -d ' ' \\\n) | sort | uniq )\n\n\n\n\np.s.,  This script runs fine on my computer (Ubuntu 13.04), but on a Fedora 11 machine it dies with pg_analyze_info.sh: line 18: unexpected EOF while looking for matching `)'\npg_analyze_info.sh: line 57: syntax error: unexpected end of fileIf anyone knows why, or encounters a similar error and fixes it, please let me know!\nIt's the blank line on line 26.  Put a backslash on that line or delete it entirely.Craig\nWell that made perfect sense, but after making that change I'm still getting the same error.# Get tablesTABLES=$( \\\ncat <( \\# Indexed tables \\egrep -o 'Index Scan using \\b[a-zA-Z0-9_-]* on [a-zA-Z0-9_-]*' $FILE  | cut -f 6 -d ' ' \\\n) <( \\# Scanned Tables \\egrep -o 'Seq Scan on \\b[a-zA-Z0-9_-]* ' $FILE | cut -f 4 -d ' ' \\\n) | sort | uniq )", "msg_date": "Sun, 29 Sep 2013 14:24:52 -0700", "msg_from": "Ken Tanzer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BASH script for collecting analyze-related info" }, { "msg_contents": "On Sun, Sep 29, 2013 at 2:24 PM, Ken Tanzer <[email protected]> wrote:\n\n> p.s., This script runs fine on my computer (Ubuntu 13.04), but on a\n>>> Fedora 11 machine it dies with\n>>>\n>>> pg_analyze_info.sh: line 18: unexpected EOF while looking for matching\n>>> `)'\n>>> pg_analyze_info.sh: line 57: syntax error: unexpected end of file\n>>>\n>>> If anyone knows why, or encounters a similar error and fixes it, please\n>>> let me know!\n>>>\n>>\n>> It's the blank line on line 26. Put a backslash on that line or delete\n>> it entirely.\n>>\n>> Craig\n>>\n>\n>\n> Well that made perfect sense, but after making that change I'm still\n> getting the same error.\n>\n\nTry putting the entire TABLE= on one big line and see if that works. If it\ndoes, work backwards from there, inserting backslash-newlines. It may be\nsomething simple like an extra space after a backslash.\n\nCraig\n\nOn Sun, Sep 29, 2013 at 2:24 PM, Ken Tanzer <[email protected]> wrote:\n\n\n\n\np.s.,  This script runs fine on my computer (Ubuntu 13.04), but on a Fedora 11 machine it dies with pg_analyze_info.sh: line 18: unexpected EOF while looking for matching `)'\npg_analyze_info.sh: line 57: syntax error: unexpected end of fileIf anyone knows why, or encounters a similar error and fixes it, please let me know!\nIt's the blank line on line 26.  Put a backslash on that line or delete it entirely.Craig\nWell that made perfect sense, but after making that change I'm still getting the same error.Try putting the entire TABLE= on one big line and see if that works.  If it does, work backwards from there, inserting backslash-newlines.  
It may be something simple like an extra space after a backslash.\nCraig", "msg_date": "Sun, 29 Sep 2013 16:59:05 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BASH script for collecting analyze-related info" }, { "msg_contents": "On nie, wrz 29, 2013 at 02:09:07 -0700, Ken Tanzer wrote:\n> p.s., This script runs fine on my computer (Ubuntu 13.04), but on a Fedora\n> 11 machine it dies with\n> \n> pg_analyze_info.sh: line 18: unexpected EOF while looking for matching `)'\n> pg_analyze_info.sh: line 57: syntax error: unexpected end of file\n> \n> If anyone knows why, or encounters a similar error and fixes it, please let\n> me know!\n\nFixed by changing it to:\n\n#v+\n# Get tables\nTABLES=$(\ncat <(\n# Indexed tables\negrep -o 'Index Scan using \\b[a-zA-Z0-9_-]* on [a-zA-Z0-9_-]*' $FILE | cut -f 6 -d ' '\n) <(\n# Scanned Tables\negrep -o 'Seq Scan on \\b[a-zA-Z0-9_-]* ' $FILE | cut -f 4 -d ' '\n) | sort | uniq\n\n)\n#v-\n\nThat is - I removed the \"\\\" at the end - it's of no use.\n\nThere are couple of issues/questions though:\n1. instead of: \"SELECT 'Postgres Version';\" it's better to use \\echo Postgres Version\n2. why is there union with nulls in the last query?\n3. When extracting table names you are missing:\n a. Index Scan Backward\n b. Bitmap Heap Scan\n4. When extracting index names, you're missing Index Only Scans and Index Scan\n Backwards.\n5. The whole script will fail if you're using table names with spaces (not that\n I think this is sane, but the script should recognize it)\n6. It's generally better to use\n if [[ ...\n than\n if [ ...\n reason - [[ is internal for bash, while [ is fork to external program\n7. instead of | sort | uniq, it's better to use sort -u\n8. usage of all-upper-case variables in bash is (by some, more\n bash-skilled people, like on #bash on irc.freenode) frowned upon.\n all-uppercase is supposed to be for environment variables only.\n\nAll in all - looks pretty good.\n\ndepesz\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 Sep 2013 09:05:11 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BASH script for collecting analyze-related info" }, { "msg_contents": "Thanks for the suggestions, help and feedback. New version attached.\n\n\n3. When extracting table names you are missing:\n> a. Index Scan Backward\n> b. Bitmap Heap Scan\n> 4. When extracting index names, you're missing Index Only Scans and Index\n> Scan Backwards.\n\n\nIf someone can send me analyze output with these characteristics, I'll try\nto get the script to pick them up.\n\n\n5. The whole script will fail if you're using table names with spaces (not\n> that\n> I think this is sane, but the script should recognize it)\n\n\nUggh yes. I imagine it will fail on international characters as well. Not\nsure if I want to tackle that right now, though suggestions welcome. (Or\nif someone else wants to do it!) I did tweak so that quoted identifiers\nwill work, e.g. mixed case field names, and also ones with the $ sign. I\ncompletely understand people using other languages, but really do people\nneed spaces in their names? :)\n\n\n2. why is there union with nulls in the last query?\n\n\nLaziness, convenience or expediency, pick your preferred label. I needed\nsomething to go with the last \"UNION\" that was generated. 
I changed it to\ndo this more cleanly.\n\n\n1. instead of: \"SELECT 'Postgres Version';\" it's better to use \\echo\n> Postgres Version\n\n\nMuch better. I used qecho so it can be redirected with the rest of the\noutput.\n\n\nThat is - I removed the \"\\\" at the end - it's of no use.\n>\n\nGreat. I ended up taking them all out.\n\n\n\n> 6. It's generally better to use\n> if [[ ...\n> than\n> if [ ...\n> reason - [[ is internal for bash, while [ is fork to external program\n> 7. instead of | sort | uniq, it's better to use sort -u\n>\n\nCheck and check, did both of these\n\n\n\n> 8. usage of all-upper-case variables in bash is (by some, more\n> bash-skilled people, like on #bash on irc.freenode) frowned upon.\n> all-uppercase is supposed to be for environment variables only.\n>\n\nPersonally I like the upper case names as they stand out easily in the\nscript. But I'd hate to be frowned on by the bashers (or bashed by the\nfrowners), so I changed them to lower case.\n\n\n\n> All in all - looks pretty good.\n>\n>\nThanks!\n\nCheers,\nKen\n\n-- \nAGENCY Software\nA data system that puts you in control\n100% Free Software\n*http://agency-software.org/*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing\nlist<[email protected]?body=subscribe>\n to\nlearn more about AGENCY or\nfollow the discussion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 30 Sep 2013 19:40:55 -0700", "msg_from": "Ken Tanzer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BASH script for collecting analyze-related info" }, { "msg_contents": "On Wed, Oct 9, 2013 at 10:39 AM, John Melesky\n<[email protected]>wrote:\n\n> (off-list)\n>\n(on-list)\n\n\n> I doubt I'm the first to ask, but have you considered putting this up on\n> github (or similar) so others can contribute more easily and keep it\n> up-to-date?\n>\n> If that's not in the cards, are you opposed to someone else putting it up\n> on github and taking over management of it?\n>\n> -john\n>\n>\nActually, you definitely were the first to ask. I'm not opposed to any\ncombination of putting this on github, continuing to work on this script,\nor having someone else do it. But before doing so (or to start), let me\nthrow out a few questions:\n\nFirst and foremost (and primarily directed to people who are kind enough to\nprovide help on this list), is a script like this worthwhile? Will it help\nget better problem reports and save back-and-forth time of \"please post x,\ny and z?\" If not, I don't see the point of pursuing this.\n\nIf it is worthwhile, what's the best way of going about getting the\nnecessary information? Parsing the contents of EXPLAIN output that wasn't\ndesigned for this purpose seems doable, but clunky at best. And it doesn't\ntell you which views are involved.\n\nIn an ideal situation, I'd see this being built into postgres, so that you\ncould do something like EXPLAIN [ANALYZE] *DESCRIBE ...*, and get your\ndescriptions directly as part of the output. If that is pure fantasy, is\nthere any way to directly identify all the tables, views and indexes that\nare involved in a query plan?\n\nIf no to the above, then parsing the analyze output seems the only option.\n I was pondering what language a script to do this should be written in,\nand I'm thinking that writing it in pgpsql might be the best option, since\nit would be the most portable and presumably available on all systems. 
I\nhaven't fully thought that through, so I'm wondering if anyone sees reasons\nthat wouldn't work, or if some other language would be a better or more\nnatural choice.\n\nCheers,\nKen\n\n\n-- \nAGENCY Software\nA data system that puts you in control\n100% Free Software\n*http://agency-software.org/*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing\nlist<[email protected]?body=subscribe>\n to\nlearn more about AGENCY or\nfollow the discussion.\n\nOn Wed, Oct 9, 2013 at 10:39 AM, John Melesky <[email protected]> wrote:\n(off-list)(on-list) \nI doubt I'm the first to ask, but have you considered putting this up on github (or similar) so others can contribute more easily and keep it up-to-date?\nIf that's not in the cards, are you opposed to someone else putting it up on github and taking over management of it?-johnActually, you definitely were the first to ask.  I'm not opposed to any combination of putting this on github, continuing to work on this script, or having someone else do it.  But before doing so (or to start), let me throw out a few questions:\nFirst and foremost (and primarily directed to people who are kind enough to provide help on this list), is a script like this worthwhile?  Will it help get better problem reports and save back-and-forth time of \"please post x, y and z?\"  If not, I don't see the point of pursuing this.\nIf it is worthwhile, what's the best way of going about getting the necessary information?  Parsing the contents of EXPLAIN output that wasn't designed for this purpose seems doable, but clunky at best.  And it doesn't tell you which views are involved.\nIn an ideal situation, I'd see this being built into postgres, so that you could do something like EXPLAIN [ANALYZE] DESCRIBE ..., and get your descriptions directly as part of the output.  If that is pure fantasy, is there any way to directly identify all the tables, views and indexes that are involved in a query plan?\nIf no to the above, then parsing the analyze output seems the only option.  I was pondering what language a script to do this should be written in, and I'm thinking that writing it in pgpsql might be the best option, since it would be the most portable and presumably available on all systems.  I haven't fully thought that through, so I'm wondering if anyone sees reasons that wouldn't work, or if some other language would be a better or more natural choice.\nCheers,Ken -- AGENCY Software  A data system that puts you in control\n100% Free Softwarehttp://agency-software.org/[email protected]\n(253) 245-3801Subscribe to the mailing list tolearn more about AGENCY or\nfollow the discussion.", "msg_date": "Wed, 16 Oct 2013 00:20:09 -0700", "msg_from": "Ken Tanzer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BASH script for collecting analyze-related info" }, { "msg_contents": "Hey John, and thanks for the input.\n\nOn Wed, Oct 16, 2013 at 11:00 AM, John Melesky <[email protected]\n> wrote:\n\n> On Wed, Oct 16, 2013 at 12:20 AM, Ken Tanzer <[email protected]> wrote:\n>\n>> First and foremost (and primarily directed to people who are kind enough\n>> to provide help on this list), is a script like this worthwhile? Will it\n>> help get better problem reports and save back-and-forth time of \"please\n>> post x, y and z?\" If not, I don't see the point of pursuing this.\n>>\n>\n> Given the amount of back-and-forth that's already happened in this thread,\n> it looks like the script meets at least some needs.\n>\n\nAgreed. It definitely met my immediate need. 
But I'm actually seeing the\nlack of responses from \"list helpers\" as an indication this might not be of\nmuch general interest. Of course that's ok too. :)\n\n\n(Quoting myself.) In an ideal situation, I'd see this being built into\n> postgres, so that you could do something like EXPLAIN [ANALYZE] *DESCRIBE\n> ...*, and get your descriptions directly as part of the output. If that\n> is pure fantasy, is there any way to directly identify all the tables,\n> views and indexes that are involved in a query plan?\n\n\nThis approach seems like the only way you could really get from half-assed\nto Ass 1.0. I'll just keep it in the pipe dream category unless or until\nsomeone else says otherwise. A script it is, with a possible goal of\ngetting to Ass 0.7.\n\n\n>\n>\n>> If it is worthwhile, what's the best way of going about getting the\n>> necessary information? Parsing the contents of EXPLAIN output that wasn't\n>> designed for this purpose seems doable, but clunky at best. And it doesn't\n>> tell you which views are involved.\n>>\n>\n> Well, EXPLAIN can actually output in a couple of more machine-readable\n> formats (XML, JSON, and YAML), which would be a good place to start. I'm\n> not sure what the best way to get view information would be (aside from\n> potentially parsing the query itself), but I'm sure someone has an idea.\n>\n>> If that is pure fantasy, is there any way to directly identify all the\n>> tables, views and indexes that are involved in a query plan?\n>>\n>\n> Tables and indexes you can get from explaining with a machine-readable\n> format, then traversing the tree looking for 'Relation Name' or 'Index\n> Name' attributes.\n>\n\nI'm not seeing how these other output formats would help much. They don't\nseem to contain additional information (and still not the views). Plus,\nI'd think it would be best to make sure the describes are based on the same\nexplain. Since the \"regular\"? format seems to be what is submitted to the\nmailing list, it seems best to just stick with parsing that.\n\n\n> If no to the above, then parsing the analyze output seems the only\n>> option. I was pondering what language a script to do this should be\n>> written in, and I'm thinking that writing it in pgpsql might be the best\n>> option, since it would be the most portable and presumably available on all\n>> systems. I haven't fully thought that through, so I'm wondering if anyone\n>> sees reasons that wouldn't work, or if some other language would be a\n>> better or more natural choice.\n>>\n>\n> I'm a proponent of just taking what you have and publishing it. That lets\n> people use it who need it now, and makes it easier for others to improve\n> it.\n>\n\nI'm not really sure how putting this in on github and linking to it here\nmakes it any easier to access than the version attached in this thread, but\nhere goes. https://github.com/ktanzer/pg_analyze_info\n\nIf there's a need for a cross-platform version, that's more likely to\n> happen if people who use that platform can test that script for you.\n\n\nI was thinking that a pgpsql version would be inherently more\ncross-platform, and definitely more available. 
A bash script seems to cut\nout at least most of the Windows users.\n\n\n> Notably, publishing what you have now won't in any way prevent you from\n> rewriting it in a plpgsql function and packaging that up on pgxn later.\n>\n> Yup.\n\nCheers,\nKen\n\n\n-- \nAGENCY Software\nA data system that puts you in control\n100% Free Software\n*http://agency-software.org/*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing\nlist<[email protected]?body=subscribe>\n to\nlearn more about AGENCY or\nfollow the discussion.\n\nHey John, and thanks for the input.On Wed, Oct 16, 2013 at 11:00 AM, John Melesky <[email protected]> wrote:\n\nOn Wed, Oct 16, 2013 at 12:20 AM, Ken Tanzer <[email protected]> wrote:\n\n\nFirst and foremost (and primarily directed to people who are kind enough to provide help on this list), is a script like this worthwhile?  Will it help get better problem reports and save back-and-forth time of \"please post x, y and z?\"  If not, I don't see the point of pursuing this.\nGiven the amount of back-and-forth that's already happened in this thread, it looks like the script meets at least some needs.\nAgreed.  It definitely met my immediate need.  But I'm actually seeing the lack of responses from \"list helpers\" as an indication this might not be of much general interest.  Of course that's ok too. :)\n(Quoting myself.)  In an ideal situation, I'd see this being built into postgres, so that you could do something like EXPLAIN [ANALYZE] DESCRIBE ..., and get your descriptions directly as part of the output.  If that is pure fantasy, is there any way to directly identify all the tables, views and indexes that are involved in a query plan?\nThis approach seems like the only way you could really get from half-assed to Ass 1.0.  I'll just keep it in the pipe dream category unless or until someone else says otherwise.  A script it is, with a possible goal of getting to Ass 0.7.\n \n \nIf it is worthwhile, what's the best way of going about getting the necessary information?  Parsing the contents of EXPLAIN output that wasn't designed for this purpose seems doable, but clunky at best.  And it doesn't tell you which views are involved.\nWell, EXPLAIN can actually output in a couple of more machine-readable formats (XML, JSON, and YAML), which would be a good place to start. I'm not sure what the best way to get view information would be (aside from potentially parsing the query itself), but I'm sure someone has an idea.\n\nIf that is pure fantasy, is there any way to directly identify all the tables, views and indexes that are involved in a query plan? \nTables and indexes you can get from explaining with a machine-readable format, then traversing the tree looking for 'Relation Name' or 'Index Name' attributes.\nI'm not seeing how these other output formats would help much.  They don't seem to contain additional information (and still not the views).  Plus, I'd think it would be best to make sure the describes are based on the same explain.  Since the \"regular\"? format seems to be what is submitted to the mailing list, it seems best to just stick with parsing that.\n \n\nIf no to the above, then parsing the analyze output seems the only option.  I was pondering what language a script to do this should be written in, and I'm thinking that writing it in pgpsql might be the best option, since it would be the most portable and presumably available on all systems.  
I haven't fully thought that through, so I'm wondering if anyone sees reasons that wouldn't work, or if some other language would be a better or more natural choice.\nI'm a proponent of just taking what you have and publishing it. That lets people use it who need it now, and makes it easier for others to improve it. \nI'm not really sure how putting this in on github and linking to it here makes it any easier to access than the version attached in this thread, but here goes.  https://github.com/ktanzer/pg_analyze_info\nIf there's a need for a cross-platform version, that's more likely to happen if people who use that platform can test that script for you.\nI was thinking that a pgpsql version would be inherently more cross-platform, and definitely more available.  A bash script seems to cut out at least most of the Windows users. \nNotably, publishing what you have now won't in any way prevent you from rewriting it in a plpgsql function and packaging that up on pgxn later.\nYup.Cheers,Ken -- \nAGENCY Software  A data system that puts you in control100% Free Softwarehttp://agency-software.org/\[email protected](253) 245-3801Subscribe to the mailing list to\nlearn more about AGENCY orfollow the discussion.", "msg_date": "Wed, 16 Oct 2013 21:50:57 -0700", "msg_from": "Ken Tanzer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BASH script for collecting analyze-related info" } ]
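For reference, a psql sketch of the kind of information the script is meant to collect, using the \qecho approach mentioned above; the EXPLAIN line is a placeholder for whatever query is being analyzed (FORMAT JSON needs 9.0 or later):

    \qecho Postgres Version
    SELECT version();

    \qecho Non-default settings
    SELECT name, current_setting(name), source
    FROM   pg_settings
    WHERE  source NOT IN ('default', 'override');

    \qecho Plan in machine-readable form ("Relation Name" / "Index Name" nodes)
    EXPLAIN (FORMAT JSON) SELECT 1;   -- placeholder query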
[ { "msg_contents": "I am trying to understand the heap_blks_read and heap_blks_hit of\npg_statio_all_tables in 9.2\nDo the numbers refer only to SELECT, or they take INSERT into account?\nWould a heap_blks_read / ( heap_blks_read + heap_blks_hit ) ration of over\n55% combined with a heap_blks_read value of over 50M indicate an issue with\nthe queries affecting that table, or it is normal if the table is heavily\nwritten to?\nThanks\n\nI am trying to understand the heap_blks_read and heap_blks_hit of pg_statio_all_tables in 9.2Do the numbers refer only to SELECT, or they take INSERT into account?Would a heap_blks_read / ( heap_blks_read + heap_blks_hit ) ration of over 55% combined with a heap_blks_read value of over 50M indicate an issue with the queries affecting that table, or it is normal if the table is heavily written to?\nThanks", "msg_date": "Mon, 30 Sep 2013 15:45:46 +0300", "msg_from": "Xenofon Papadopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "pg_statio_all_tables columns" }, { "msg_contents": "Hi Xenofon,\n\nIl 30/09/2013 14:45, Xenofon Papadopoulos ha scritto:\n> I am trying to understand the heap_blks_read and heap_blks_hit of \n> pg_statio_all_tables in 9.2\n> Do the numbers refer only to SELECT, or they take INSERT into account?\n\nheap_blks_read and heap_blks_hit refer to number of blocks read from \ndisk layer and from page cache respectively during table usage, \nindependently if insert, select, delete, update operations are involved.\n\n> Would a heap_blks_read / ( heap_blks_read + heap_blks_hit ) ration of \n> over 55% combined with a heap_blks_read value of over 50M indicate an \n> issue with the queries affecting that table, or it is normal if the \n> table is heavily written to?\n\nHigh values of this ratio mean you have a well-cached database, since \ndisk blocks reads slow down database operations. You can performe it \nincreasing the cache available to your database.\n\n\nGiuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 Sep 2013 16:40:49 +0200", "msg_from": "Giuseppe Broccolo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_statio_all_tables columns" }, { "msg_contents": "Hello Giuseppe,\ndo you actually mean I have a *poorly-cached *database?\nShould this ratio stay low even in the case of a write-heavy table?\n\n\n\nOn Mon, Sep 30, 2013 at 5:40 PM, Giuseppe Broccolo <\[email protected]> wrote:\n\n> Hi Xenofon,\n>\n> Il 30/09/2013 14:45, Xenofon Papadopoulos ha scritto:\n>\n> I am trying to understand the heap_blks_read and heap_blks_hit of\n>> pg_statio_all_tables in 9.2\n>> Do the numbers refer only to SELECT, or they take INSERT into account?\n>>\n>\n> heap_blks_read and heap_blks_hit refer to number of blocks read from disk\n> layer and from page cache respectively during table usage, independently if\n> insert, select, delete, update operations are involved.\n>\n>\n> Would a heap_blks_read / ( heap_blks_read + heap_blks_hit ) ration of\n>> over 55% combined with a heap_blks_read value of over 50M indicate an issue\n>> with the queries affecting that table, or it is normal if the table is\n>> heavily written to?\n>>\n>\n> High values of this ratio mean you have a well-cached database, since disk\n> blocks reads slow down database operations. 
You can performe it increasing\n> the cache available to your database.\n>\n>\n> Giuseppe.\n>\n> --\n> Giuseppe Broccolo - 2ndQuadrant Italy\n> PostgreSQL Training, Services and Support\n> giuseppe.broccolo@2ndQuadrant.**it | www.2ndQuadrant.it\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nHello Giuseppe,do you actually mean I have a poorly-cached database?Should this ratio stay low even in the case of a write-heavy table?\nOn Mon, Sep 30, 2013 at 5:40 PM, Giuseppe Broccolo <[email protected]> wrote:\nHi Xenofon,\n\nIl 30/09/2013 14:45, Xenofon Papadopoulos ha scritto:\n\nI am trying to understand the heap_blks_read and heap_blks_hit of pg_statio_all_tables in 9.2\nDo the numbers refer only to SELECT, or they take INSERT into account?\n\n\nheap_blks_read and heap_blks_hit refer to number of blocks read from disk layer and from page cache respectively during table usage, independently if insert, select, delete, update operations are involved.\n\n\n\nWould a heap_blks_read / ( heap_blks_read + heap_blks_hit ) ration of over 55% combined with a heap_blks_read value of over 50M indicate an issue with the queries affecting that table, or it is normal if the table is heavily written to?\n\n\nHigh values of this ratio mean you have a well-cached database, since disk blocks reads slow down database operations. You can performe it increasing the cache available to your database.\n\n\nGiuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 30 Sep 2013 17:41:44 +0300", "msg_from": "Xenofon Papadopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_statio_all_tables columns" }, { "msg_contents": "Il 30/09/2013 16:41, Xenofon Papadopoulos ha scritto:\n> Should this ratio stay low even in the case of a write-heavy table?\n\nYes, in my opinion. Before data manipolation, database pages are moved \non the shared buffer. 
heap_blks_read and heap_blks_hit are involved in \nthose operations, not directly in data persistance on hard driver.\n\nGiuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 Sep 2013 18:30:43 +0200", "msg_from": "Giuseppe Broccolo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_statio_all_tables columns" }, { "msg_contents": "On Mon, Sep 30, 2013 at 5:45 AM, Xenofon Papadopoulos <[email protected]>wrote:\n\n> I am trying to understand the heap_blks_read and heap_blks_hit of\n> pg_statio_all_tables in 9.2\n> Do the numbers refer only to SELECT, or they take INSERT into account?\n>\n\nThey take insert (and update, and delete) into account.\n\n\n> Would a heap_blks_read / ( heap_blks_read + heap_blks_hit ) ration of over\n> 55% combined with a heap_blks_read value of over 50M indicate an issue with\n> the queries affecting that table, or it is normal if the table is heavily\n> written to?\n>\n\nThere is really no answer to that. For one thing, some unknown number of\nthose heap_blks_read are really coming from the OS/FS's page cache, not\nfrom disk. For another thing, we don't know how many queries, of what\nkind, on how large of a table, those 50M reads are supporting.\n\nDo you have a performance problem? If so, is it due to IO bottleneck? If\nso, high heap_blks_read on a certain table might indicate where the problem\ncould be (although pg_stat_statements would probably do a better job).\n\nIn the absence of a specific problem to be diagnosed, those numbers don't\nmean very much.\n\n\nCheers,\n\nJeff\n\nOn Mon, Sep 30, 2013 at 5:45 AM, Xenofon Papadopoulos <[email protected]> wrote:\nI am trying to understand the heap_blks_read and heap_blks_hit of pg_statio_all_tables in 9.2Do the numbers refer only to SELECT, or they take INSERT into account?\nThey take insert (and update, and delete) into account. \nWould a heap_blks_read / ( heap_blks_read + heap_blks_hit ) ration of over 55% combined with a heap_blks_read value of over 50M indicate an issue with the queries affecting that table, or it is normal if the table is heavily written to?\nThere is really no answer to that.  For one thing, some unknown number of those heap_blks_read are really coming from the OS/FS's page cache, not from disk.   For another thing, we don't know how many queries, of what kind, on how large of a table, those 50M reads are supporting.\nDo you have a performance problem?  If so, is it due to IO bottleneck?  
If so, high heap_blks_read on a certain table might indicate where the problem could be (although pg_stat_statements would probably do a better job).\nIn the absence of a specific problem to be diagnosed, those numbers don't mean very much.Cheers,Jeff", "msg_date": "Mon, 30 Sep 2013 10:44:46 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_statio_all_tables columns" }, { "msg_contents": "I do have a performance problem, and it is due to I/O bottleneck.\nWe don't have pg_stat_statements installed, we will check it out.\nThanks\n\n\n\nOn Mon, Sep 30, 2013 at 8:44 PM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Sep 30, 2013 at 5:45 AM, Xenofon Papadopoulos <[email protected]>wrote:\n>\n>> I am trying to understand the heap_blks_read and heap_blks_hit of\n>> pg_statio_all_tables in 9.2\n>> Do the numbers refer only to SELECT, or they take INSERT into account?\n>>\n>\n> They take insert (and update, and delete) into account.\n>\n>\n>> Would a heap_blks_read / ( heap_blks_read + heap_blks_hit ) ration of\n>> over 55% combined with a heap_blks_read value of over 50M indicate an issue\n>> with the queries affecting that table, or it is normal if the table is\n>> heavily written to?\n>>\n>\n> There is really no answer to that. For one thing, some unknown number of\n> those heap_blks_read are really coming from the OS/FS's page cache, not\n> from disk. For another thing, we don't know how many queries, of what\n> kind, on how large of a table, those 50M reads are supporting.\n>\n> Do you have a performance problem? If so, is it due to IO bottleneck? If\n> so, high heap_blks_read on a certain table might indicate where the problem\n> could be (although pg_stat_statements would probably do a better job).\n>\n> In the absence of a specific problem to be diagnosed, those numbers don't\n> mean very much.\n>\n>\n> Cheers,\n>\n> Jeff\n>\n\nI do have a performance problem, and it is due to I/O bottleneck.We don't have pg_stat_statements installed, we will check it out.Thanks\nOn Mon, Sep 30, 2013 at 8:44 PM, Jeff Janes <[email protected]> wrote:\nOn Mon, Sep 30, 2013 at 5:45 AM, Xenofon Papadopoulos <[email protected]> wrote:\n\nI am trying to understand the heap_blks_read and heap_blks_hit of pg_statio_all_tables in 9.2Do the numbers refer only to SELECT, or they take INSERT into account?\nThey take insert (and update, and delete) into account. \n\nWould a heap_blks_read / ( heap_blks_read + heap_blks_hit ) ration of over 55% combined with a heap_blks_read value of over 50M indicate an issue with the queries affecting that table, or it is normal if the table is heavily written to?\nThere is really no answer to that.  For one thing, some unknown number of those heap_blks_read are really coming from the OS/FS's page cache, not from disk.   For another thing, we don't know how many queries, of what kind, on how large of a table, those 50M reads are supporting.\nDo you have a performance problem?  If so, is it due to IO bottleneck?  If so, high heap_blks_read on a certain table might indicate where the problem could be (although pg_stat_statements would probably do a better job).\nIn the absence of a specific problem to be diagnosed, those numbers don't mean very much.Cheers,Jeff", "msg_date": "Mon, 30 Sep 2013 20:50:17 +0300", "msg_from": "Xenofon Papadopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_statio_all_tables columns" } ]
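A sketch of the two checks discussed above: the per-table hit ratio straight out of pg_statio_user_tables, and pg_stat_statements, which on 9.2 is a contrib extension that must be added to shared_preload_libraries and needs a server restart:

    -- Tables doing the most reads from outside shared_buffers, with hit ratio.
    SELECT relname, heap_blks_read, heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
    FROM   pg_statio_user_tables
    ORDER  BY heap_blks_read DESC
    LIMIT  20;

    -- postgresql.conf:  shared_preload_libraries = 'pg_stat_statements'
    CREATE EXTENSION pg_stat_statements;

    -- Statements responsible for the most block reads.
    SELECT query, calls, total_time, shared_blks_read, shared_blks_hit
    FROM   pg_stat_statements
    ORDER  BY shared_blks_read DESC
    LIMIT  20;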
[ { "msg_contents": "I need a help on postgresql performance\n\nI have configurate my postgresql files for tunning my server, however it is\nslow and cpu resources are highter than 120%\n\nI have no idea on how to solve this issue, I was trying to search more\ninfor on google but is not enough, I also have try autovacum sentences and\nreindex db, but it continues beeing slow\n\nMy app is a gps listener that insert more than 6000 records per minutes\nusing a tcp server developed on python twisted, where there is no problems,\nthe problem is when I try to follow the gps devices on a map on a relatime,\nI am doing queries each 6 seconds to my database from my django app, for\nrequest last position using a stored procedure, but the query get slow on\nmore than 50 devices and cpu start to using more than 120% of its resources\n\nDjango App connect the postgres database directly, and tcp listener server\nfor teh devices connect database on threaded way using pgbouncer, I have\nnot using my django web app on pgbouncer caause I dont want to crash gps\ndevices connection on the pgbouncer\n\nI hoe you could help on get a better performance\n\nI am attaching my store procedure, my conf files and my cpu, memory\ninformation\n\n**Stored procedure**\n\n CREATE OR REPLACE FUNCTION gps_get_live_location (\n _imeis varchar(8)\n )\n RETURNS TABLE (\n imei varchar,\n device_id integer,\n date_time_process timestamp with time zone,\n latitude double precision,\n longitude double precision,\n course smallint,\n speed smallint,\n mileage integer,\n gps_signal smallint,\n gsm_signal smallint,\n alarm_status boolean,\n gsm_status boolean,\n vehicle_status boolean,\n alarm_over_speed boolean,\n other text,\n address varchar\n ) AS $func$\n DECLARE\n arr varchar[];\n BEGIN\n arr := regexp_split_to_array(_imeis, E'\\\\s+');\n FOR i IN 1..array_length(arr, 1) LOOP\n RETURN QUERY\n SELECT\n gpstracking_device_tracks.imei,\n gpstracking_device_tracks.device_id,\n gpstracking_device_tracks.date_time_process,\n gpstracking_device_tracks.latitude,\n gpstracking_device_tracks.longitude,\n gpstracking_device_tracks.course,\n gpstracking_device_tracks.speed,\n gpstracking_device_tracks.mileage,\n gpstracking_device_tracks.gps_signal,\n gpstracking_device_tracks.gsm_signal,\n gpstracking_device_tracks.alarm_status,\n gpstracking_device_tracks.gps_status,\n gpstracking_device_tracks.vehicle_status,\n gpstracking_device_tracks.alarm_over_speed,\n gpstracking_device_tracks.other,\n gpstracking_device_tracks.address\n FROM gpstracking_device_tracks\n WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR\n AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',\nnow())\n AND gpstracking_device_tracks.date_time_process <= NOW()\n ORDER BY gpstracking_device_tracks.date_time_process DESC\n LIMIT 1;\n END LOOP;\n RETURN;\n END;\n $func$\n LANGUAGE plpgsql VOLATILE SECURITY DEFINER;\n\n**$ cat less /etc/sysctl.conf**\n\n kernel.shmmax = 6871947673\n kernel.shmall = 6871947673\n fs.file-max = 4194304\n\n**$ cat /etc/postgresql/9.1/main/postgresql.conf**\n\n data_directory = '/var/lib/postgresql/9.1/main' # use data in\nanother directory\n hba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based\nauthentication file\n ident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident\nconfiguration file\n external_pid_file = '/var/run/postgresql/9.1-main.pid' # write\nan extra PID file\n listen_addresses = 'localhost' # what IP address(es) to listen\non;\n port = 5432 # (change requires restart)\n max_connections = 80 # 
(change requires restart)\n superuser_reserved_connections = 3 # (change requires restart)\n unix_socket_directory = '/var/run/postgresql' # (change\nrequires restart)\n #unix_socket_group = '' # (change requires restart)\n #unix_socket_permissions = 0777 # begin with 0 to use octal\nnotation\n #bonjour = off # advertise server via Bonjour\n #bonjour_name = '' # defaults to the computer name\n ssl = true # (change requires restart)\n #ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL\nciphers\n #ssl_renegotiation_limit = 512MB # amount of data between\nrenegotiations\n #password_encryption = on\n #db_user_namespace = off\n #krb_server_keyfile = ''\n #krb_srvname = 'postgres' # (Kerberos only)\n #krb_caseins_users = off\n #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n #tcp_keepalives_count = 0 # TCP_KEEPCNT;\n # shared_buffers = 4096MB # min 128kB\n temp_buffers = 16MB # min 800kB\n # work_mem = 80MB # min 64kB\n # maintenance_work_mem = 2048MB # min 1MB\n max_stack_depth = 4MB # min 100kB\n #max_files_per_process = 1000 # min 25\n #vacuum_cost_delay = 0ms # 0-100 milliseconds\n #vacuum_cost_page_hit = 1 # 0-10000 credits\n #vacuum_cost_page_miss = 10 # 0-10000 credits\n #vacuum_cost_page_dirty = 20 # 0-10000 credits\n #vacuum_cost_limit = 200 # 1-10000 credits\n #bgwriter_delay = 200ms # 10-10000ms between rounds\n #bgwriter_lru_maxpages = 100 # 0-1000 max buffers\nwritten/round\n #bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\nscanned/round\n #effective_io_concurrency = 1 # 1-1000. 0 disables prefetching\n #wal_level = minimal # minimal, archive, or\nhot_standby\n #fsync = on # turns forced synchronization\non or off\n #synchronous_commit = on # synchronization level; on,\noff, or local\n #wal_sync_method = fsync # the default is the first\noption\n #full_page_writes = on # recover from partial page\nwrites\n #wal_buffers = -1 # min 32kB, -1 sets based on\nshared_buffers\n #wal_writer_delay = 200ms # 1-10000 milliseconds\n #commit_delay = 0 # range 0-100000, in\nmicroseconds\n #commit_siblings = 5 # range 1-1000\n # checkpoint_segments = 64 # in logfile segments, min 1,\n16MB each\n checkpoint_timeout = 5min # range 30s-1h\n # checkpoint_completion_target = 0.5 # checkpoint target duration,\n0.0 - 1.0\n #checkpoint_warning = 30s # 0 disables\n #archive_mode = off # allows archiving to be done\n #archive_command = '' # command to use to archive a logfile\nsegment\n #archive_timeout = 0 # force a logfile segment switch after\nthis\n #max_wal_senders = 0 # max number of walsender processes\n #wal_sender_delay = 1s # walsender cycle time, 1-10000\nmilliseconds\n #wal_keep_segments = 0 # in logfile segments, 16MB each; 0\ndisables\n #vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is\ndelayed\n #replication_timeout = 60s # in milliseconds; 0 disables\n #synchronous_standby_names = '' # standby servers that provide sync rep\n #hot_standby = off # \"on\" allows queries during\nrecovery\n #max_standby_archive_delay = 30s # max delay before canceling\nqueries\n #max_standby_streaming_delay = 30s # max delay before canceling\nqueries\n #wal_receiver_status_interval = 10s # send replies at least this\noften\n #hot_standby_feedback = off # send info from standby to\nprevent\n #enable_bitmapscan = on\n #enable_hashagg = on\n #enable_hashjoin = on\n #enable_indexscan = on\n #enable_material = on\n #enable_mergejoin = on\n #enable_nestloop = on\n #enable_seqscan = on\n #enable_sort = on\n 
#enable_tidscan = on\n #seq_page_cost = 1.0 # measured on an arbitrary scale\n #random_page_cost = 4.0 # same scale as above\n cpu_tuple_cost = 0.01 # same scale as above\n cpu_index_tuple_cost = 0.005 # same scale as above\n #cpu_operator_cost = 0.0025 # same scale as above\n # effective_cache_size = 8192MB\n #geqo = on\n #geqo_threshold = 12\n #geqo_effort = 5 # range 1-10\n #geqo_pool_size = 0 # selects default based on\neffort\n #geqo_generations = 0 # selects default based on\neffort\n #geqo_selection_bias = 2.0 # range 1.5-2.0\n #geqo_seed = 0.0 # range 0.0-1.0\n #default_statistics_target = 100 # range 1-10000\n #constraint_exclusion = partition # on, off, or partition\n #cursor_tuple_fraction = 0.1 # range 0.0-1.0\n #from_collapse_limit = 8\n #join_collapse_limit = 8 # 1 disables collapsing of\nexplicit\n #log_destination = 'stderr' # Valid values are combinations\nof\n #logging_collector = off # Enable capturing of stderr\nand csvlog\n # These are only used if logging_collector is on:\n #log_directory = 'pg_log' # directory where log files are\nwritten,\n #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name\npattern,\n #log_file_mode = 0600 # creation mode for log files,\n #log_truncate_on_rotation = off # If on, an existing log file\nwith the\n #log_rotation_age = 1d # Automatic rotation of\nlogfiles will\n #log_rotation_size = 10MB # Automatic rotation of\nlogfiles will\n #syslog_facility = 'LOCAL0'\n #syslog_ident = 'postgres'\n #silent_mode = off # Run server silently.\n #client_min_messages = notice # values in order of decreasing\ndetail:\n #log_min_messages = warning # values in order of decreasing\ndetail:\n #log_min_error_statement = error # values in order of decreasing\ndetail:\n #log_min_duration_statement = -1 # -1 is disabled, 0 logs all\nstatements\n #debug_print_parse = off\n #debug_print_rewritten = off\n #debug_print_plan = off\n #debug_pretty_print = on\n #log_checkpoints = off\n #log_connections = off\n #log_disconnections = off\n #log_duration = off\n #log_error_verbosity = default # terse, default, or verbose\nmessages\n #log_hostname = off\n log_line_prefix = '%t ' # special values:\n #log_lock_waits = off # log lock waits >=\ndeadlock_timeout\n #log_statement = 'none' # none, ddl, mod, all\n #log_temp_files = -1 # log temporary files equal or\nlarger\n #log_timezone = '(defaults to server environment setting)'\n #track_activities = on\n #track_counts = on\n #track_functions = none # none, pl, all\n #track_activity_query_size = 1024 # (change requires restart)\n #update_process_title = on\n #stats_temp_directory = 'pg_stat_tmp'\n #log_parser_stats = off\n #log_planner_stats = off\n #log_executor_stats = off\n #log_statement_stats = off\n #autovacuum = on # Enable autovacuum subprocess?\n 'on'\n #log_autovacuum_min_duration = -1 # -1 disables, 0 logs all\nactions and\n #autovacuum_max_workers = 3 # max number of autovacuum\nsubprocesses\n #autovacuum_naptime = 1min # time between autovacuum runs\n #autovacuum_vacuum_threshold = 50 # min number of row updates\nbefore\n #autovacuum_analyze_threshold = 50 # min number of row updates\nbefore\n #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before\nvacuum\n #autovacuum_analyze_scale_factor = 0.1 # fraction of table size before\nanalyze\n #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced\nvacuum\n #autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for\n #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n #search_path = 
'\"$user\",public' # schema names\n #default_tablespace = '' # a tablespace name, '' uses\nthe default\n #temp_tablespaces = '' # a list of tablespace names,\n'' uses\n #check_function_bodies = on\n #default_transaction_isolation = 'read committed'\n #default_transaction_read_only = off\n #default_transaction_deferrable = off\n #session_replication_role = 'origin'\n #statement_timeout = 0 # in milliseconds, 0 is disabled\n #vacuum_freeze_min_age = 50000000\n #vacuum_freeze_table_age = 150000000\n #bytea_output = 'hex' # hex, escape\n #xmlbinary = 'base64'\n #xmloption = 'content'\n datestyle = 'iso, mdy'\n #intervalstyle = 'postgres'\n #timezone = '(defaults to server environment setting)'\n #timezone_abbreviations = 'Default' # Select the set of available\ntime zone\n #extra_float_digits = 0 # min -15, max 3\n #client_encoding = sql_ascii # actually, defaults to database\n lc_messages = 'en_US.UTF-8' # locale for system\nerror message\n lc_monetary = 'en_US.UTF-8' # locale for monetary\nformatting\n lc_numeric = 'en_US.UTF-8' # locale for number\nformatting\n lc_time = 'en_US.UTF-8' # locale for time\nformatting\n default_text_search_config = 'pg_catalog.english'\n #dynamic_library_path = '$libdir'\n #local_preload_libraries = ''\n #deadlock_timeout = 1s\n #max_locks_per_transaction = 64 # min 10\n #max_pred_locks_per_transaction = 64 # min 10\n #array_nulls = on\n #backslash_quote = safe_encoding # on, off, or safe_encoding\n #default_with_oids = off\n #escape_string_warning = on\n #lo_compat_privileges = off\n #quote_all_identifiers = off\n #sql_inheritance = on\n #standard_conforming_strings = on\n #synchronize_seqscans = on\n #transform_null_equals = off\n #exit_on_error = off # terminate session on\nany error?\n #restart_after_crash = on # reinitialize after\nbackend crash?\n #custom_variable_classes = '' # list of custom variable class\nnames\n default_statistics_target = 50 # pgtune wizard 2013-09-24\n maintenance_work_mem = 960MB # pgtune wizard 2013-09-24\n constraint_exclusion = on # pgtune wizard 2013-09-24\n checkpoint_completion_target = 0.9 # pgtune wizard 2013-09-24\n effective_cache_size = 11GB # pgtune wizard 2013-09-24\n work_mem = 96MB # pgtune wizard 2013-09-24\n wal_buffers = 8MB # pgtune wizard 2013-09-24\n checkpoint_segments = 16 # pgtune wizard 2013-09-24\n shared_buffers = 3840MB # pgtune wizard 2013-09-24\n\n**$ cat /etc/pgbouncer/pgbouncer.ini**\n\n [databases]\n anfitrion = host=127.0.0.1 port=5432 dbname=**** user=****\npassword=**** client_encoding=UNICODE datestyle=ISO connect_query='SELECT 1'\n\n [pgbouncer]\n logfile = /var/log/postgresql/pgbouncer.log\n pidfile = /var/run/postgresql/pgbouncer.pid\n listen_addr = 127.0.0.1\n listen_port = 6432\n unix_socket_dir = /var/run/postgresql\n auth_type = trust\n auth_file = /etc/pgbouncer/userlist.txt\n ;admin_users = user2, someadmin, otheradmin\n ;stats_users = stats, root\n pool_mode = statement\n server_reset_query = DISCARD ALL\n ;ignore_startup_parameters = extra_float_digits\n ;server_check_query = select 1\n ;server_check_delay = 30\n ; total number of clients that can connect\n max_client_conn = 1000\n default_pool_size = 80\n ;reserve_pool_size = 5\n ;reserve_pool_timeout = 3\n ;log_connections = 1\n ;log_disconnections = 1\n ;log_pooler_errors = 1\n ;server_round_robin = 0\n ;server_lifetime = 1200\n ;server_idle_timeout = 60\n ;server_connect_timeout = 15\n ;server_login_retry = 15\n ;query_timeout = 0\n ;query_wait_timeout = 0\n ;client_idle_timeout = 0\n ;client_login_timeout = 60\n 
;autodb_idle_timeout = 3600\n ;pkt_buf = 2048\n ;listen_backlog = 128\n ;tcp_defer_accept = 0\n ;tcp_socket_buffer = 0\n ;tcp_keepalive = 1\n ;tcp_keepcnt = 0\n ;tcp_keepidle = 0\n ;tcp_keepintvl = 0\n ;dns_max_ttl = 15\n ;dns_zone_check_period = 0\n\n**$ free -h**\n total used free shared buffers cached\nMem: 15G 11G 4.1G 0B 263M 10G\n-/+ buffers/cache: 1.2G 14G\nSwap: 30G 0B 30G\n\n\n**$ cat /proc/cpuinfo**\n\n processor : 0\n vendor_id : GenuineIntel\n cpu family : 6\n model : 58\n model name : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz\n stepping : 9\n microcode : 0x15\n cpu MHz : 3101.000\n cache size : 8192 KB\n physical id : 0\n siblings : 4\n core id : 0\n cpu cores : 4\n apicid : 0\n initial apicid : 0\n fpu : yes\n fpu_exception : yes\n cpuid level : 13\n wp : yes\n flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall\nnx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology\nnonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2\nssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer\naes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm\ntpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms\n bogomips : 6186.05\n clflush size : 64\n cache_alignment : 64\n address sizes : 36 bits physical, 48 bits virtual\n power management:\n processor : 1\n vendor_id : GenuineIntel\n cpu family : 6\n model : 58\n model name : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz\n stepping : 9\n microcode : 0x15\n cpu MHz : 3101.000\n cache size : 8192 KB\n physical id : 0\n siblings : 4\n core id : 1\n cpu cores : 4\n apicid : 2\n initial apicid : 2\n fpu : yes\n fpu_exception : yes\n cpuid level : 13\n wp : yes\n flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall\nnx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology\nnonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2\nssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer\naes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm\ntpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms\n bogomips : 6185.65\n clflush size : 64\n cache_alignment : 64\n address sizes : 36 bits physical, 48 bits virtual\n power management:\n processor : 2\n vendor_id : GenuineIntel\n cpu family : 6\n model : 58\n model name : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz\n stepping : 9\n microcode : 0x15\n cpu MHz : 3101.000\n cache size : 8192 KB\n physical id : 0\n siblings : 4\n core id : 2\n cpu cores : 4\n apicid : 4\n initial apicid : 4\n fpu : yes\n fpu_exception : yes\n cpuid level : 13\n wp : yes\n flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall\nnx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology\nnonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2\nssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer\naes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm\ntpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms\n bogomips : 6185.66\n clflush size : 64\n cache_alignment : 64\n address sizes : 36 bits physical, 48 bits virtual\n power management:\n\n-- \nCarlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter\n GNU Linux Admin | PHP Senior Web Developer\n Mobil: RPC 
(Claro)+51, 958194614 | Mov: +51, 959980794\n GTalk: [email protected] | Skype: csotelop\n MSN: [email protected] | Yahoo: csotelop\n GNULinux RU #379182 | GNULinux RM #277661\nGPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B
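A quick way to see where the polling time goes is to run the innermost query of
gps_get_live_location for a single device and look at the plan. This is only a
sketch: the table and column names are taken from the stored procedure quoted in
this thread, the IMEI literal is a placeholder, and the index name is invented
for illustration.

    -- Plan for one device's "latest position" lookup (placeholder IMEI).
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM gpstracking_device_tracks
    WHERE imei = '123456789012345'
      AND date_time_process >= date_trunc('hour', now())
      AND date_time_process <= now()
    ORDER BY date_time_process DESC
    LIMIT 1;

    -- If that shows a sequential scan, a composite index such as this one
    -- (hypothetical name) lets each lookup stop at the newest matching row:
    CREATE INDEX CONCURRENTLY idx_tracks_imei_dtp
        ON gpstracking_device_tracks (imei, date_time_process DESC);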
", "msg_date": "Mon, 30 Sep 2013 10:03:02 -0500", "msg_from": "Carlos Eduardo Sotelo Pinto <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Help_on_=E1=B9=95erformance?=" }, { "msg_contents": "I need help with PostgreSQL performance.\n\nI have configured my PostgreSQL files to tune my server, but it is still slow\nand CPU usage is higher than 120%.\n\nI have run out of ideas on how to solve this issue; I have searched for more\ninformation on Google, but it is not enough, and I have also tried autovacuum\nand reindexing the database, but it is still slow.\n\nMy application is a GPS listener that inserts more than 6,000 records per\nminute through a TCP server built with Python Twisted, where there are no\nproblems; the problem is when I try to 
follow the GPS devices\non a map in real time: every 6 seconds my Django app queries the database for\nthe last position of each device through a stored procedure, but the query\ngets slow with more than 50 devices and the CPU starts to use more than 120%\nof its resources (a set-based sketch of that procedure follows at the end of\nthis message).\n\nMy Django app connects to the Postgres database directly, and the TCP listener\nserver for the devices connects to the database through pgbouncer; I am not\nusing pgbouncer for the Django web app because I do not want to disturb the\nGPS device connections on pgbouncer.\n\nI am attaching my stored procedure, my config files, and my CPU and memory\ninformation; I hope you can help me.\n\n\n**Stored procedure**\n\n CREATE OR REPLACE FUNCTION gps_get_live_location (\n _imeis varchar(8)\n )\n RETURNS TABLE (\n imei varchar,\n device_id integer,\n date_time_process timestamp with time zone,\n latitude double precision,\n longitude double precision,\n course smallint,\n speed smallint,\n mileage integer,\n gps_signal smallint,\n gsm_signal smallint,\n alarm_status boolean,\n gsm_status boolean,\n vehicle_status boolean,\n alarm_over_speed boolean,\n other text,\n address varchar\n ) AS $func$\n DECLARE\n arr varchar[];\n BEGIN\n arr := regexp_split_to_array(_imeis, E'\\\\s+');\n FOR i IN 1..array_length(arr, 1) LOOP\n RETURN QUERY\n SELECT\n gpstracking_device_tracks.imei,\n gpstracking_device_tracks.device_id,\n gpstracking_device_tracks.date_time_process,\n gpstracking_device_tracks.latitude,\n gpstracking_device_tracks.longitude,\n gpstracking_device_tracks.course,\n gpstracking_device_tracks.speed,\n gpstracking_device_tracks.mileage,\n gpstracking_device_tracks.gps_signal,\n gpstracking_device_tracks.gsm_signal,\n gpstracking_device_tracks.alarm_status,\n gpstracking_device_tracks.gps_status,\n gpstracking_device_tracks.vehicle_status,\n gpstracking_device_tracks.alarm_over_speed,\n gpstracking_device_tracks.other,\n gpstracking_device_tracks.address\n FROM gpstracking_device_tracks\n WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR\n AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',\nnow())\n AND gpstracking_device_tracks.date_time_process <= NOW()\n ORDER BY gpstracking_device_tracks.date_time_process DESC\n LIMIT 1;\n END LOOP;\n RETURN;\n END;\n $func$\n LANGUAGE plpgsql VOLATILE SECURITY DEFINER;\n\n**$ cat less /etc/sysctl.conf**\n\n kernel.shmmax = 6871947673\n kernel.shmall = 6871947673\n fs.file-max = 4194304\n\n**$ cat /etc/postgresql/9.1/main/postgresql.conf**\n\n data_directory = '/var/lib/postgresql/9.1/main' # use data in\nanother directory\n hba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based\nauthentication file\n ident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident\nconfiguration file\n external_pid_file = '/var/run/postgresql/9.1-main.pid' # write\nan extra PID file\n listen_addresses = 'localhost' # what IP address(es) to listen\non;\n port = 5432 # (change requires restart)\n max_connections = 80 # (change requires restart)\n superuser_reserved_connections = 3 # (change requires restart)\n unix_socket_directory = '/var/run/postgresql' # (change\nrequires restart)\n #unix_socket_group = '' # (change requires restart)\n #unix_socket_permissions = 0777 # begin with 0 to use octal\nnotation\n #bonjour = off # advertise server via Bonjour\n #bonjour_name = '' # defaults to the computer name\n ssl = true # (change requires restart)\n #ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL\nciphers\n 
#ssl_renegotiation_limit = 512MB # amount of data between\nrenegotiations\n #password_encryption = on\n #db_user_namespace = off\n #krb_server_keyfile = ''\n #krb_srvname = 'postgres' # (Kerberos only)\n #krb_caseins_users = off\n #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n #tcp_keepalives_count = 0 # TCP_KEEPCNT;\n # shared_buffers = 4096MB # min 128kB\n temp_buffers = 16MB # min 800kB\n # work_mem = 80MB # min 64kB\n # maintenance_work_mem = 2048MB # min 1MB\n max_stack_depth = 4MB # min 100kB\n #max_files_per_process = 1000 # min 25\n #vacuum_cost_delay = 0ms # 0-100 milliseconds\n #vacuum_cost_page_hit = 1 # 0-10000 credits\n #vacuum_cost_page_miss = 10 # 0-10000 credits\n #vacuum_cost_page_dirty = 20 # 0-10000 credits\n #vacuum_cost_limit = 200 # 1-10000 credits\n #bgwriter_delay = 200ms # 10-10000ms between rounds\n #bgwriter_lru_maxpages = 100 # 0-1000 max buffers\nwritten/round\n #bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\nscanned/round\n #effective_io_concurrency = 1 # 1-1000. 0 disables prefetching\n #wal_level = minimal # minimal, archive, or\nhot_standby\n #fsync = on # turns forced synchronization\non or off\n #synchronous_commit = on # synchronization level; on,\noff, or local\n #wal_sync_method = fsync # the default is the first\noption\n #full_page_writes = on # recover from partial page\nwrites\n #wal_buffers = -1 # min 32kB, -1 sets based on\nshared_buffers\n #wal_writer_delay = 200ms # 1-10000 milliseconds\n #commit_delay = 0 # range 0-100000, in\nmicroseconds\n #commit_siblings = 5 # range 1-1000\n # checkpoint_segments = 64 # in logfile segments, min 1,\n16MB each\n checkpoint_timeout = 5min # range 30s-1h\n # checkpoint_completion_target = 0.5 # checkpoint target duration,\n0.0 - 1.0\n #checkpoint_warning = 30s # 0 disables\n #archive_mode = off # allows archiving to be done\n #archive_command = '' # command to use to archive a logfile\nsegment\n #archive_timeout = 0 # force a logfile segment switch after\nthis\n #max_wal_senders = 0 # max number of walsender processes\n #wal_sender_delay = 1s # walsender cycle time, 1-10000\nmilliseconds\n #wal_keep_segments = 0 # in logfile segments, 16MB each; 0\ndisables\n #vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is\ndelayed\n #replication_timeout = 60s # in milliseconds; 0 disables\n #synchronous_standby_names = '' # standby servers that provide sync rep\n #hot_standby = off # \"on\" allows queries during\nrecovery\n #max_standby_archive_delay = 30s # max delay before canceling\nqueries\n #max_standby_streaming_delay = 30s # max delay before canceling\nqueries\n #wal_receiver_status_interval = 10s # send replies at least this\noften\n #hot_standby_feedback = off # send info from standby to\nprevent\n #enable_bitmapscan = on\n #enable_hashagg = on\n #enable_hashjoin = on\n #enable_indexscan = on\n #enable_material = on\n #enable_mergejoin = on\n #enable_nestloop = on\n #enable_seqscan = on\n #enable_sort = on\n #enable_tidscan = on\n #seq_page_cost = 1.0 # measured on an arbitrary scale\n #random_page_cost = 4.0 # same scale as above\n cpu_tuple_cost = 0.01 # same scale as above\n cpu_index_tuple_cost = 0.005 # same scale as above\n #cpu_operator_cost = 0.0025 # same scale as above\n # effective_cache_size = 8192MB\n #geqo = on\n #geqo_threshold = 12\n #geqo_effort = 5 # range 1-10\n #geqo_pool_size = 0 # selects default based on\neffort\n #geqo_generations = 0 # selects default based on\neffort\n #geqo_selection_bias = 
2.0 # range 1.5-2.0\n #geqo_seed = 0.0 # range 0.0-1.0\n #default_statistics_target = 100 # range 1-10000\n #constraint_exclusion = partition # on, off, or partition\n #cursor_tuple_fraction = 0.1 # range 0.0-1.0\n #from_collapse_limit = 8\n #join_collapse_limit = 8 # 1 disables collapsing of\nexplicit\n #log_destination = 'stderr' # Valid values are combinations\nof\n #logging_collector = off # Enable capturing of stderr\nand csvlog\n # These are only used if logging_collector is on:\n #log_directory = 'pg_log' # directory where log files are\nwritten,\n #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name\npattern,\n #log_file_mode = 0600 # creation mode for log files,\n #log_truncate_on_rotation = off # If on, an existing log file\nwith the\n #log_rotation_age = 1d # Automatic rotation of\nlogfiles will\n #log_rotation_size = 10MB # Automatic rotation of\nlogfiles will\n #syslog_facility = 'LOCAL0'\n #syslog_ident = 'postgres'\n #silent_mode = off # Run server silently.\n #client_min_messages = notice # values in order of decreasing\ndetail:\n #log_min_messages = warning # values in order of decreasing\ndetail:\n #log_min_error_statement = error # values in order of decreasing\ndetail:\n #log_min_duration_statement = -1 # -1 is disabled, 0 logs all\nstatements\n #debug_print_parse = off\n #debug_print_rewritten = off\n #debug_print_plan = off\n #debug_pretty_print = on\n #log_checkpoints = off\n #log_connections = off\n #log_disconnections = off\n #log_duration = off\n #log_error_verbosity = default # terse, default, or verbose\nmessages\n #log_hostname = off\n log_line_prefix = '%t ' # special values:\n #log_lock_waits = off # log lock waits >=\ndeadlock_timeout\n #log_statement = 'none' # none, ddl, mod, all\n #log_temp_files = -1 # log temporary files equal or\nlarger\n #log_timezone = '(defaults to server environment setting)'\n #track_activities = on\n #track_counts = on\n #track_functions = none # none, pl, all\n #track_activity_query_size = 1024 # (change requires restart)\n #update_process_title = on\n #stats_temp_directory = 'pg_stat_tmp'\n #log_parser_stats = off\n #log_planner_stats = off\n #log_executor_stats = off\n #log_statement_stats = off\n #autovacuum = on # Enable autovacuum subprocess?\n 'on'\n #log_autovacuum_min_duration = -1 # -1 disables, 0 logs all\nactions and\n #autovacuum_max_workers = 3 # max number of autovacuum\nsubprocesses\n #autovacuum_naptime = 1min # time between autovacuum runs\n #autovacuum_vacuum_threshold = 50 # min number of row updates\nbefore\n #autovacuum_analyze_threshold = 50 # min number of row updates\nbefore\n #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before\nvacuum\n #autovacuum_analyze_scale_factor = 0.1 # fraction of table size before\nanalyze\n #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced\nvacuum\n #autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for\n #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n #search_path = '\"$user\",public' # schema names\n #default_tablespace = '' # a tablespace name, '' uses\nthe default\n #temp_tablespaces = '' # a list of tablespace names,\n'' uses\n #check_function_bodies = on\n #default_transaction_isolation = 'read committed'\n #default_transaction_read_only = off\n #default_transaction_deferrable = off\n #session_replication_role = 'origin'\n #statement_timeout = 0 # in milliseconds, 0 is disabled\n #vacuum_freeze_min_age = 50000000\n #vacuum_freeze_table_age = 150000000\n #bytea_output = 'hex' # hex, 
escape\n #xmlbinary = 'base64'\n #xmloption = 'content'\n datestyle = 'iso, mdy'\n #intervalstyle = 'postgres'\n #timezone = '(defaults to server environment setting)'\n #timezone_abbreviations = 'Default' # Select the set of available\ntime zone\n #extra_float_digits = 0 # min -15, max 3\n #client_encoding = sql_ascii # actually, defaults to database\n lc_messages = 'en_US.UTF-8' # locale for system\nerror message\n lc_monetary = 'en_US.UTF-8' # locale for monetary\nformatting\n lc_numeric = 'en_US.UTF-8' # locale for number\nformatting\n lc_time = 'en_US.UTF-8' # locale for time\nformatting\n default_text_search_config = 'pg_catalog.english'\n #dynamic_library_path = '$libdir'\n #local_preload_libraries = ''\n #deadlock_timeout = 1s\n #max_locks_per_transaction = 64 # min 10\n #max_pred_locks_per_transaction = 64 # min 10\n #array_nulls = on\n #backslash_quote = safe_encoding # on, off, or safe_encoding\n #default_with_oids = off\n #escape_string_warning = on\n #lo_compat_privileges = off\n #quote_all_identifiers = off\n #sql_inheritance = on\n #standard_conforming_strings = on\n #synchronize_seqscans = on\n #transform_null_equals = off\n #exit_on_error = off # terminate session on\nany error?\n #restart_after_crash = on # reinitialize after\nbackend crash?\n #custom_variable_classes = '' # list of custom variable class\nnames\n default_statistics_target = 50 # pgtune wizard 2013-09-24\n maintenance_work_mem = 960MB # pgtune wizard 2013-09-24\n constraint_exclusion = on # pgtune wizard 2013-09-24\n checkpoint_completion_target = 0.9 # pgtune wizard 2013-09-24\n effective_cache_size = 11GB # pgtune wizard 2013-09-24\n work_mem = 96MB # pgtune wizard 2013-09-24\n wal_buffers = 8MB # pgtune wizard 2013-09-24\n checkpoint_segments = 16 # pgtune wizard 2013-09-24\n shared_buffers = 3840MB # pgtune wizard 2013-09-24\n\n**$ cat /etc/pgbouncer/pgbouncer.ini**\n\n [databases]\n anfitrion = host=127.0.0.1 port=5432 dbname=**** user=****\npassword=**** client_encoding=UNICODE datestyle=ISO connect_query='SELECT 1'\n\n [pgbouncer]\n logfile = /var/log/postgresql/pgbouncer.log\n pidfile = /var/run/postgresql/pgbouncer.pid\n listen_addr = 127.0.0.1\n listen_port = 6432\n unix_socket_dir = /var/run/postgresql\n auth_type = trust\n auth_file = /etc/pgbouncer/userlist.txt\n ;admin_users = user2, someadmin, otheradmin\n ;stats_users = stats, root\n pool_mode = statement\n server_reset_query = DISCARD ALL\n ;ignore_startup_parameters = extra_float_digits\n ;server_check_query = select 1\n ;server_check_delay = 30\n ; total number of clients that can connect\n max_client_conn = 1000\n default_pool_size = 80\n ;reserve_pool_size = 5\n ;reserve_pool_timeout = 3\n ;log_connections = 1\n ;log_disconnections = 1\n ;log_pooler_errors = 1\n ;server_round_robin = 0\n ;server_lifetime = 1200\n ;server_idle_timeout = 60\n ;server_connect_timeout = 15\n ;server_login_retry = 15\n ;query_timeout = 0\n ;query_wait_timeout = 0\n ;client_idle_timeout = 0\n ;client_login_timeout = 60\n ;autodb_idle_timeout = 3600\n ;pkt_buf = 2048\n ;listen_backlog = 128\n ;tcp_defer_accept = 0\n ;tcp_socket_buffer = 0\n ;tcp_keepalive = 1\n ;tcp_keepcnt = 0\n ;tcp_keepidle = 0\n ;tcp_keepintvl = 0\n ;dns_max_ttl = 15\n ;dns_zone_check_period = 0\n\n**$ free -h**\n total used free shared buffers cached\nMem: 15G 11G 4.1G 0B 263M 10G\n-/+ buffers/cache: 1.2G 14G\nSwap: 30G 0B 30G\n\n\n**$ cat /proc/cpuinfo**\n\n processor : 0\n vendor_id : GenuineIntel\n cpu family : 6\n model : 58\n model name : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz\n 
stepping : 9\n microcode : 0x15\n cpu MHz : 3101.000\n cache size : 8192 KB\n physical id : 0\n siblings : 4\n core id : 0\n cpu cores : 4\n apicid : 0\n initial apicid : 0\n fpu : yes\n fpu_exception : yes\n cpuid level : 13\n wp : yes\n flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall\nnx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology\nnonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2\nssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer\naes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm\ntpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms\n bogomips : 6186.05\n clflush size : 64\n cache_alignment : 64\n address sizes : 36 bits physical, 48 bits virtual\n power management:\n processor : 1\n vendor_id : GenuineIntel\n cpu family : 6\n model : 58\n model name : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz\n stepping : 9\n microcode : 0x15\n cpu MHz : 3101.000\n cache size : 8192 KB\n physical id : 0\n siblings : 4\n core id : 1\n cpu cores : 4\n apicid : 2\n initial apicid : 2\n fpu : yes\n fpu_exception : yes\n cpuid level : 13\n wp : yes\n flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall\nnx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology\nnonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2\nssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer\naes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm\ntpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms\n bogomips : 6185.65\n clflush size : 64\n cache_alignment : 64\n address sizes : 36 bits physical, 48 bits virtual\n power management:\n processor : 2\n vendor_id : GenuineIntel\n cpu family : 6\n model : 58\n model name : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz\n stepping : 9\n microcode : 0x15\n cpu MHz : 3101.000\n cache size : 8192 KB\n physical id : 0\n siblings : 4\n core id : 2\n cpu cores : 4\n apicid : 4\n initial apicid : 4\n fpu : yes\n fpu_exception : yes\n cpuid level : 13\n wp : yes\n flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall\nnx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology\nnonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2\nssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer\naes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm\ntpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms\n bogomips : 6185.66\n clflush size : 64\n cache_alignment : 64\n address sizes : 36 bits physical, 48 bits virtual\n power management:\n\n-- \nCarlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter\n GNU Linux Admin | PHP Senior Web Developer\n Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794\n GTalk: [email protected] | Skype: csotelop\n MSN: [email protected] | Yahoo: csotelop\n GNULinux RU #379182 | GNULinux RM #277661\nGPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B\n\n\n\n-- \nCarlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter\n GNU Linux Admin | PHP Senior Web Developer\n Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794\n GTalk: [email protected] | Skype: csotelop\n MSN: [email protected] | Yahoo: csotelop\n GNULinux RU #379182 | GNULinux RM #277661\nGPG 
FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B
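The PL/pgSQL loop in gps_get_live_location above runs one query per IMEI, so a
6-second poll over 50+ devices turns into hundreds of separate "latest row"
queries per minute. Below is a minimal set-based sketch, not the original
function: it assumes the same gpstracking_device_tracks table and
whitespace-separated IMEI list as the function quoted above, the function name
gps_get_live_location_set is invented, and it returns whole table rows rather
than the original column list.

    CREATE OR REPLACE FUNCTION gps_get_live_location_set (_imeis varchar)
    RETURNS SETOF gpstracking_device_tracks AS $func$
    BEGIN
        -- One statement instead of a per-IMEI loop: DISTINCT ON keeps only
        -- the newest row per requested IMEI within the current hour.
        RETURN QUERY
        SELECT DISTINCT ON (t.imei) t.*
        FROM unnest(regexp_split_to_array(_imeis, E'\\\\s+')) AS u(imei)
        JOIN gpstracking_device_tracks t ON t.imei = u.imei
        WHERE t.date_time_process >= date_trunc('hour', now())
          AND t.date_time_process <= now()
        ORDER BY t.imei, t.date_time_process DESC;
    END;
    $func$ LANGUAGE plpgsql STABLE;

Calling it with a space-separated list, e.g. SELECT * FROM
gps_get_live_location_set('111111 222222'); (placeholder IMEIs), returns the
newest row per device in one statement, and it benefits from a composite index
on (imei, date_time_process) like the one sketched after the first message in
this thread.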
external_pid_file = '/var/run/postgresql/9.1-main.pid'          # write an extra PID file\n\n    listen_addresses = 'localhost'          # what IP address(es) to listen on;    port = 5432                             # (change requires restart)    max_connections = 80                    # (change requires restart)\n    superuser_reserved_connections = 3      # (change requires restart)    unix_socket_directory = '/var/run/postgresql'           # (change requires restart)    #unix_socket_group = ''                 # (change requires restart)\n    #unix_socket_permissions = 0777         # begin with 0 to use octal notation    #bonjour = off                          # advertise server via Bonjour    #bonjour_name = ''                      # defaults to the computer name\n    ssl = true                              # (change requires restart)    #ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH'      # allowed SSL ciphers    #ssl_renegotiation_limit = 512MB        # amount of data between renegotiations\n    #password_encryption = on    #db_user_namespace = off    #krb_server_keyfile = ''    #krb_srvname = 'postgres'               # (Kerberos only)    #krb_caseins_users = off\n    #tcp_keepalives_idle = 0                # TCP_KEEPIDLE, in seconds;    #tcp_keepalives_interval = 0            # TCP_KEEPINTVL, in seconds;    #tcp_keepalives_count = 0               # TCP_KEEPCNT;\n    # shared_buffers = 4096MB                       # min 128kB    temp_buffers = 16MB                     # min 800kB    # work_mem = 80MB                               # min 64kB    # maintenance_work_mem = 2048MB         # min 1MB\n    max_stack_depth = 4MB                   # min 100kB    #max_files_per_process = 1000           # min 25    #vacuum_cost_delay = 0ms                # 0-100 milliseconds    #vacuum_cost_page_hit = 1               # 0-10000 credits\n    #vacuum_cost_page_miss = 10             # 0-10000 credits    #vacuum_cost_page_dirty = 20            # 0-10000 credits    #vacuum_cost_limit = 200                # 1-10000 credits\n\n    #bgwriter_delay = 200ms                 # 10-10000ms between rounds    #bgwriter_lru_maxpages = 100            # 0-1000 max buffers written/round    #bgwriter_lru_multiplier = 2.0          # 0-10.0 multipler on buffers scanned/round\n    #effective_io_concurrency = 1           # 1-1000. 
0 disables prefetching    #wal_level = minimal                    # minimal, archive, or hot_standby    #fsync = on                             # turns forced synchronization on or off\n    #synchronous_commit = on                # synchronization level; on, off, or local    #wal_sync_method = fsync                # the default is the first option    #full_page_writes = on                  # recover from partial page writes\n    #wal_buffers = -1                       # min 32kB, -1 sets based on shared_buffers    #wal_writer_delay = 200ms               # 1-10000 milliseconds    #commit_delay = 0                       # range 0-100000, in microseconds\n    #commit_siblings = 5                    # range 1-1000    # checkpoint_segments = 64              # in logfile segments, min 1, 16MB each    checkpoint_timeout = 5min               # range 30s-1h\n    # checkpoint_completion_target = 0.5    # checkpoint target duration, 0.0 - 1.0    #checkpoint_warning = 30s               # 0 disables    #archive_mode = off             # allows archiving to be done\n    #archive_command = ''           # command to use to archive a logfile segment    #archive_timeout = 0            # force a logfile segment switch after this    #max_wal_senders = 0            # max number of walsender processes\n    #wal_sender_delay = 1s          # walsender cycle time, 1-10000 milliseconds    #wal_keep_segments = 0          # in logfile segments, 16MB each; 0 disables    #vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed\n    #replication_timeout = 60s      # in milliseconds; 0 disables    #synchronous_standby_names = '' # standby servers that provide sync rep    #hot_standby = off                      # \"on\" allows queries during recovery\n    #max_standby_archive_delay = 30s        # max delay before canceling queries    #max_standby_streaming_delay = 30s      # max delay before canceling queries    #wal_receiver_status_interval = 10s     # send replies at least this often\n    #hot_standby_feedback = off             # send info from standby to prevent    #enable_bitmapscan = on    #enable_hashagg = on    #enable_hashjoin = on    #enable_indexscan = on\n    #enable_material = on    #enable_mergejoin = on    #enable_nestloop = on    #enable_seqscan = on    #enable_sort = on    #enable_tidscan = on    #seq_page_cost = 1.0                    # measured on an arbitrary scale\n    #random_page_cost = 4.0                 # same scale as above    cpu_tuple_cost = 0.01                   # same scale as above    cpu_index_tuple_cost = 0.005            # same scale as above\n    #cpu_operator_cost = 0.0025             # same scale as above    # effective_cache_size = 8192MB    #geqo = on    #geqo_threshold = 12    #geqo_effort = 5                        # range 1-10\n    #geqo_pool_size = 0                     # selects default based on effort    #geqo_generations = 0                   # selects default based on effort    #geqo_selection_bias = 2.0              # range 1.5-2.0\n    #geqo_seed = 0.0                        # range 0.0-1.0    #default_statistics_target = 100        # range 1-10000    #constraint_exclusion = partition       # on, off, or partition\n\n    #cursor_tuple_fraction = 0.1            # range 0.0-1.0    #from_collapse_limit = 8    #join_collapse_limit = 8                # 1 disables collapsing of explicit    #log_destination = 'stderr'             # Valid values are combinations of\n    #logging_collector = off                # Enable capturing of stderr and csvlog    # These 
are only used if logging_collector is on:    #log_directory = 'pg_log'               # directory where log files are written,\n    #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'        # log file name pattern,    #log_file_mode = 0600                   # creation mode for log files,    #log_truncate_on_rotation = off         # If on, an existing log file with the\n    #log_rotation_age = 1d                  # Automatic rotation of logfiles will    #log_rotation_size = 10MB               # Automatic rotation of logfiles will    #syslog_facility = 'LOCAL0'\n    #syslog_ident = 'postgres'    #silent_mode = off                      # Run server silently.    #client_min_messages = notice           # values in order of decreasing detail:\n\n    #log_min_messages = warning             # values in order of decreasing detail:    #log_min_error_statement = error        # values in order of decreasing detail:    #log_min_duration_statement = -1        # -1 is disabled, 0 logs all statements\n    #debug_print_parse = off    #debug_print_rewritten = off    #debug_print_plan = off    #debug_pretty_print = on    #log_checkpoints = off    #log_connections = off\n    #log_disconnections = off    #log_duration = off    #log_error_verbosity = default          # terse, default, or verbose messages    #log_hostname = off    log_line_prefix = '%t '                 # special values:\n    #log_lock_waits = off                   # log lock waits >= deadlock_timeout    #log_statement = 'none'                 # none, ddl, mod, all    #log_temp_files = -1                    # log temporary files equal or larger\n    #log_timezone = '(defaults to server environment setting)'    #track_activities = on    #track_counts = on    #track_functions = none                 # none, pl, all\n    #track_activity_query_size = 1024       # (change requires restart)    #update_process_title = on    #stats_temp_directory = 'pg_stat_tmp'    #log_parser_stats = off\n    #log_planner_stats = off    #log_executor_stats = off    #log_statement_stats = off    #autovacuum = on                        # Enable autovacuum subprocess?  
'on'\n\n    #log_autovacuum_min_duration = -1       # -1 disables, 0 logs all actions and    #autovacuum_max_workers = 3             # max number of autovacuum subprocesses    #autovacuum_naptime = 1min              # time between autovacuum runs\n    #autovacuum_vacuum_threshold = 50       # min number of row updates before    #autovacuum_analyze_threshold = 50      # min number of row updates before    #autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum\n    #autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze    #autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum    #autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for\n    #autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for    #search_path = '\"$user\",public'         # schema names    #default_tablespace = ''                # a tablespace name, '' uses the default\n    #temp_tablespaces = ''                  # a list of tablespace names, '' uses    #check_function_bodies = on    #default_transaction_isolation = 'read committed'\n    #default_transaction_read_only = off    #default_transaction_deferrable = off    #session_replication_role = 'origin'    #statement_timeout = 0                  # in milliseconds, 0 is disabled\n    #vacuum_freeze_min_age = 50000000    #vacuum_freeze_table_age = 150000000    #bytea_output = 'hex'                   # hex, escape    #xmlbinary = 'base64'\n\n    #xmloption = 'content'    datestyle = 'iso, mdy'    #intervalstyle = 'postgres'    #timezone = '(defaults to server environment setting)'    #timezone_abbreviations = 'Default'     # Select the set of available time zone\n    #extra_float_digits = 0                 # min -15, max 3    #client_encoding = sql_ascii            # actually, defaults to database    lc_messages = 'en_US.UTF-8'                     # locale for system error message\n    lc_monetary = 'en_US.UTF-8'                     # locale for monetary formatting    lc_numeric = 'en_US.UTF-8'                      # locale for number formatting    lc_time = 'en_US.UTF-8'                         # locale for time formatting\n    default_text_search_config = 'pg_catalog.english'    #dynamic_library_path = '$libdir'    #local_preload_libraries = ''    #deadlock_timeout = 1s\n\n    #max_locks_per_transaction = 64         # min 10    #max_pred_locks_per_transaction = 64    # min 10    #array_nulls = on    #backslash_quote = safe_encoding        # on, off, or safe_encoding\n    #default_with_oids = off    #escape_string_warning = on    #lo_compat_privileges = off    #quote_all_identifiers = off    #sql_inheritance = on    #standard_conforming_strings = on\n    #synchronize_seqscans = on    #transform_null_equals = off    #exit_on_error = off                            # terminate session on any error?    
    #restart_after_crash = on                       # reinitialize after backend crash?
    #custom_variable_classes = ''           # list of custom variable class names
    default_statistics_target = 50 # pgtune wizard 2013-09-24
    maintenance_work_mem = 960MB # pgtune wizard 2013-09-24
    constraint_exclusion = on # pgtune wizard 2013-09-24
    checkpoint_completion_target = 0.9 # pgtune wizard 2013-09-24
    effective_cache_size = 11GB # pgtune wizard 2013-09-24
    work_mem = 96MB # pgtune wizard 2013-09-24
    wal_buffers = 8MB # pgtune wizard 2013-09-24
    checkpoint_segments = 16 # pgtune wizard 2013-09-24
    shared_buffers = 3840MB # pgtune wizard 2013-09-24

**$ cat /etc/pgbouncer/pgbouncer.ini**

    [databases]
    anfitrion = host=127.0.0.1 port=5432 dbname=**** user=**** password=**** client_encoding=UNICODE datestyle=ISO connect_query='SELECT 1'

    [pgbouncer]
    logfile = /var/log/postgresql/pgbouncer.log
    pidfile = /var/run/postgresql/pgbouncer.pid
    listen_addr = 127.0.0.1
    listen_port = 6432
    unix_socket_dir = /var/run/postgresql
    auth_type = trust
    auth_file = /etc/pgbouncer/userlist.txt
    ;admin_users = user2, someadmin, otheradmin
    ;stats_users = stats, root
    pool_mode = statement
    server_reset_query = DISCARD ALL
    ;ignore_startup_parameters = extra_float_digits
    ;server_check_query = select 1
    ;server_check_delay = 30
    ; total number of clients that can connect
    max_client_conn = 1000
    default_pool_size = 80
    ;reserve_pool_size = 5
    ;reserve_pool_timeout = 3
    ;log_connections = 1
    ;log_disconnections = 1
    ;log_pooler_errors = 1
    ;server_round_robin = 0
    ;server_lifetime = 1200
    ;server_idle_timeout = 60
    ;server_connect_timeout = 15
    ;server_login_retry = 15
    ;query_timeout = 0
    ;query_wait_timeout = 0
    ;client_idle_timeout = 0
    ;client_login_timeout = 60
    ;autodb_idle_timeout = 3600
    ;pkt_buf = 2048
    ;listen_backlog = 128
    ;tcp_defer_accept = 0
    ;tcp_socket_buffer = 0
    ;tcp_keepalive = 1
    ;tcp_keepcnt = 0
    ;tcp_keepidle = 0
    ;tcp_keepintvl = 0
    ;dns_max_ttl = 15
    ;dns_zone_check_period = 0

**$ free -h**

                 total       used       free     shared    buffers     cached
    Mem:           15G        11G       4.1G         0B       263M        10G
    -/+ buffers/cache:       1.2G        14G
    Swap:          30G         0B        30G

**$ cat /proc/cpuinfo**

    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 58
    model name      : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
    stepping        : 9
    microcode       : 0x15
    cpu MHz         : 3101.000
    cache size      : 8192 KB
    physical id     : 0
    siblings        : 4
    core id         : 0
    cpu cores       : 4
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 13
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
    bogomips        : 6186.05
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:

    processor       : 1
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 58
    model name      : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
    stepping        : 9
    microcode       : 0x15
    cpu MHz         : 3101.000
    cache size      : 8192 KB
    physical id     : 0
    siblings        : 4
    core id         : 1
    cpu cores       : 4
    apicid          : 2
    initial apicid  : 2
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 13
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
    bogomips        : 6185.65
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:

    processor       : 2
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 58
    model name      : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
    stepping        : 9
    microcode       : 0x15
    cpu MHz         : 3101.000
    cache size      : 8192 KB
    physical id     : 0
    siblings        : 4
    core id         : 2
    cpu cores       : 4
    apicid          : 4
    initial apicid  : 4
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 13
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
    bogomips        : 6185.66
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:

-- 
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
    GNU Linux Admin | PHP Senior Web Developer
    Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
    GTalk: [email protected] | Skype: csotelop
    MSN: [email protected] | Yahoo: csotelop
    GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B
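
If it helps to cross-check that the pgtune values above are what the running server actually uses, the live values can be read from pg_settings; a small illustrative query (setting names taken from the file above):

    -- show the live values of the settings pgtune touched
    SELECT name, setting, unit, source
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                   'effective_cache_size', 'checkpoint_segments',
                   'wal_buffers', 'default_statistics_target');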
", "msg_date": "Mon, 30 Sep 2013 10:12:49 -0500", "msg_from": "Carlos Eduardo Sotelo Pinto <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Fwd=3A_Help_on_=E1=B9=95erformance?=" }, { "msg_contents": "Thanks Gilberto

Yes, I did that, but in the case of the stored procedure there is a loop, and I am looking for a way to improve it

regards


On 30 September 2013 at 11:25, Gilberto Castillo<[email protected]> wrote:

>
>
> > I need some help with postgresql performance
> >
> > I have configured my PostgreSQL files to tune my server, however it is
> > slow and cpu usage is above 120%
> >
> > I have run out of ideas for solving this problem; I was trying to find
> > more information on google, but it is not enough, and I have also tried
> > autovacuum and reindexing the db, but it is still slow
> >
> > My application is a gps listener that inserts more than 6,000 records per
> > minute through a tcp server developed in python twisted, where there are
> > no problems; the problem is when I try to follow the GPS devices on a map
> > in real time. I query my database every 6 seconds from my django
> > application for the last position through a stored procedure, but the
> > query is slow with more than 50 devices, and the CPU starts consuming
> > more than 120% of its resources
> >
> > My Django app connects to the postgres database directly, and the TCP
> > listener server for the devices connects to the database through
> > pgbouncer; I have not put my django web application behind pgbouncer
> > because I do not want to mix the gps device connections into pgbouncer
>
> Hmmm, I suggest you route that connection through pgbouncer as well
>
> Have you run EXPLAIN ANALYZE on the queries?
>
> Regards,
> Gilberto Castillo
> La Habana, Cuba
>


-- 
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
    GNU Linux Admin | PHP Senior Web Developer
    Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
    GTalk: [email protected] | Skype: csotelop
    MSN: [email protected] | Yahoo: csotelop
    GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B
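
For the EXPLAIN ANALYZE question quoted above: when the statements run inside a PL/pgSQL function, the auto_explain contrib module can log the nested plans. A minimal, illustrative session (the IMEI values are made up):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;        -- explain every statement
    SET auto_explain.log_analyze = on;            -- include actual run times
    SET auto_explain.log_nested_statements = on;  -- include queries run inside functions

    -- then call the function the way the application does;
    -- the plans of the inner SELECTs end up in the server log
    SELECT * FROM gps_get_live_location('358899050000001 358899050000002');  -- hypothetical IMEIs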
", "msg_date": "Mon, 30 Sep 2013 10:33:55 -0500", "msg_from": "Carlos Eduardo Sotelo Pinto <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IFtwZ3NxbC1lcy1heXVkYV0gRndkOiBIZWxwIG9uIOG5lWVyZm9ybWFuY2U=?=" }, { "msg_contents": "Hello list

I would suggest running a complete diagnostic of the platform; an excellent book for this purpose is:
http://www.amazon.com/PostgreSQL-High-Performance-Gregory-Smith/dp/184951030X

But if you need something much more immediate, I found this blog with some interesting topics:

http://maauso.wordpress.com/2013/09/18/resumen-instalacion-de-postgresql-9-2-optimizada-para-sistemas-de-produccion/


On 30 September 2013 at 11:25, Gilberto Castillo<[email protected]> wrote:

>
>
> > I need some help with postgresql performance
> >
> > I have configured my PostgreSQL files to tune my server, however it is
> > slow and cpu usage is above 120%
> >
> > I have run out of ideas for solving this problem; I was trying to find
> > more information on google, but it is not enough, and I have also tried
> > autovacuum and reindexing the db, but it is still slow
> >
> > My application is a gps listener that inserts more than 6,000 records per
> > minute through a tcp server developed in python twisted, where there are
> > no problems; the problem is when I try to follow the GPS devices on a map
> > in real time. I query my database every 6 seconds from my django
> > application for the last position through a stored procedure, but the
> > query is slow with more than 50 devices, and the CPU starts consuming
> > more than 120% of its resources
> >
> > My Django app connects to the postgres database directly, and the TCP
> > listener server for the devices connects to the database through
> > pgbouncer; I have not put my django web application behind pgbouncer
> > because I do not want to mix the gps device connections into pgbouncer
>
> Hmmm, I suggest you route that connection through pgbouncer as well
>
> Have you run EXPLAIN ANALYZE on the queries?
>
> Regards,
> Gilberto Castillo
> La Habana, Cuba
>


-- 
Regards,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
PostgreSQL DBA
", "msg_date": "Mon, 30 Sep 2013 10:36:14 -0500", "msg_from": "Hellmuth Vargas <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IFtwZ3NxbC1lcy1heXVkYV0gUmU6IFtwZ3NxbC1lcy1heXVkYV0gRndkOiBIZWxwIA==?=\n =?UTF-8?B?b24g4bmVZXJmb3JtYW5jZQ==?=" }, { "msg_contents": "Carlos Eduardo Sotelo Pinto wrote:

> DECLARE
> arr varchar[];
> BEGIN
> arr := regexp_split_to_array(_imeis, E'\\\\s+');
> FOR i IN 1..array_length(arr, 1) LOOP
> RETURN QUERY

I think you should run a single query with all the elements of the
array, instead of one query per element.
That is, remove the LOOP and the LIMIT 1, and your WHERE should be
something like

...

> FROM gpstracking_device_tracks
> WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR

WHERE gpstracking_device_tracks.imei = ANY (arr) AND ...

You will have to solve getting back only one row per imei in some other
way, of course.
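
One possible shape for that last part, shown purely as an illustration (it is not from the original message), is DISTINCT ON, which keeps the latest row per imei; the column list is abbreviated:

    SELECT DISTINCT ON (t.imei)
           t.imei, t.device_id, t.date_time_process, t.latitude, t.longitude
    FROM gpstracking_device_tracks t
    WHERE t.imei = ANY (arr)
      AND t.date_time_process >= date_trunc('hour', now())
      AND t.date_time_process <= now()
    ORDER BY t.imei, t.date_time_process DESC;  -- first row per imei = most recent one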

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
", "msg_date": "Mon, 30 Sep 2013 13:12:20 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Help =?utf-8?Q?o?=\n =?utf-8?B?biDhuZVlcmZvcm1hbmNl?=" }, { "msg_contents": "Hello Alvaro

I have partially solved the problem with a query of this shape

arr := regexp_split_to_array(_imeis, E'\\\\s+');
 RETURN QUERY
SELECT
gpstracking_device_tracks.....
FROM (
SELECT
gpstracking_device_tracks......
ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
gpstracking_device_tracks.date_time_process DESC) as rnumber
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = ANY(arr)
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
) AS gpstracking_device_tracks
WHERE gpstracking_device_tracks.rnumber = 1;

And now I am reading up a bit on tuning, since I am not a DBA, let alone
an optimization expert, but I hope I can improve performance even further

Many thanks to everyone for the help


On 30 September 2013 at 11:12, Alvaro Herrera<[email protected]> wrote:

> Carlos Eduardo Sotelo Pinto wrote:
>
> > DECLARE
> > arr varchar[];
> > BEGIN
> > arr := regexp_split_to_array(_imeis, E'\\\\s+');
> > FOR i IN 1..array_length(arr, 1) LOOP
> > RETURN QUERY
>
> I think you should run a single query with all the elements of the
> array, instead of one query per element. That is, remove the LOOP and
> the LIMIT 1, and your WHERE should be something like
>
> ...
>
> > FROM gpstracking_device_tracks
> > WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
>
> WHERE gpstracking_device_tracks.imei = ANY (arr) AND ...
>
> You will have to solve getting back only one row per imei in some other
> way, of course.
>
> --
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
>


-- 
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
    GNU Linux Admin | PHP Senior Web Developer
    Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
    GTalk: [email protected] | Skype: csotelop
    MSN: [email protected] | Yahoo: csotelop
    GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B
", "msg_date": "Mon, 30 Sep 2013 11:16:33 -0500", "msg_from": "Carlos Eduardo Sotelo Pinto <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IFtwZ3NxbC1lcy1heXVkYV0gRndkOiBIZWxwIG9uIOG5lWVyZm9ybWFuY2U=?=" }, { "msg_contents": "> I need some help with postgresql performance
>
> I have configured my PostgreSQL files to tune my server, however it is
> slow and cpu usage is above 120%
>
> I have run out of ideas for solving this problem; I was trying to find
> more information on google, but it is not enough, and I have also tried
> autovacuum and reindexing the db, but it is still slow
>
> My application is a gps listener that inserts more than 6,000 records per
> minute through a tcp server developed in python twisted, where there are
> no problems; the problem is when I try to follow the GPS devices on a map
> in real time. I query my database every 6 seconds from my django
> application for the last position through a stored procedure, but the
> query is slow with more than 50 devices, and the CPU starts consuming
> more than 120% of its resources
>
> My Django app connects to the postgres database directly, and the TCP
> listener server for the devices connects to the database through
> pgbouncer; I have not put my django web application behind pgbouncer
> because I do not want to mix the gps device connections into pgbouncer

Hmmm, I suggest you route that connection through pgbouncer as well

Have you run EXPLAIN ANALYZE on the queries?

Regards,
Gilberto Castillo
La Habana, Cuba
", "msg_date": "Mon, 30 Sep 2013 11:25:29 -0500 (GMT+5)", "msg_from": "\"Gilberto Castillo\" <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?utf-8?B?UmU6IFtwZ3NxbC1lcy1heXVkYV0gRndkOiBIZWxwIG9uIOG5lWVyZm9ybWFu?=\n =?utf-8?B?Y2U=?=" }, { "msg_contents": "On 30/09/2013 01:16 p.m., Carlos Eduardo Sotelo Pinto wrote:
> Hello Alvaro
>
> I have partially solved the problem with a query of this shape
>
> arr := regexp_split_to_array(_imeis, E'\\\\s+');
> RETURN QUERY
> SELECT
> gpstracking_device_tracks.....
> FROM (
> SELECT
> gpstracking_device_tracks......
> ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
> gpstracking_device_tracks.date_time_process DESC) as rnumber
> FROM gpstracking_device_tracks
> WHERE gpstracking_device_tracks.imei = ANY(arr)
> AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
> now())
> AND gpstracking_device_tracks.date_time_process <= NOW()
> ) AS gpstracking_device_tracks
> WHERE gpstracking_device_tracks.rnumber = 1;
>
> And now I am reading up a bit on tuning, since I am not a DBA, let alone
> an optimization expert, but I hope I can improve performance even further
>
> Many thanks to everyone for the help
>
>
> On 30 September 2013 at 11:12, Alvaro Herrera<[email protected]
> <mailto:[email protected]>> wrote:
>
>     Carlos Eduardo Sotelo Pinto wrote:
>
>     > DECLARE
>     > arr varchar[];
>     > BEGIN
>     > arr := regexp_split_to_array(_imeis, E'\\\\s+');
>     > FOR i IN 1..array_length(arr, 1) LOOP
>     > RETURN QUERY
>
>     I think you should run a single query with all the elements of the
>     array, instead of one query per element. That is, remove the LOOP
>     and the LIMIT 1, and your WHERE should be something like
>
>     ...
>
>     > FROM gpstracking_device_tracks
>     > WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
>
>     WHERE gpstracking_device_tracks.imei = ANY (arr) AND ...
>
>     You will have to solve getting back only one row per imei in some
>     other way, of course.
>
>     --
>     Álvaro Herrera                http://www.2ndQuadrant.com/
>     PostgreSQL Development, 24x7 Support, Training & Services
>
>
> -- 
> Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
>     GNU Linux Admin | PHP Senior Web Developer
>     Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
>     GTalk: [email protected] | Skype: csotelop
>     MSN: [email protected] | Yahoo: csotelop
>     GNULinux RU #379182 | GNULinux RM #277661
> GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B
What indexes do you have on the table, and how are they built?
Does the table keep history, or do you purge it every so often?


Regards, Fernando
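
For the access pattern in that query (latest row per imei inside the current hour), the usual answer to this question is a composite index. A hypothetical definition — the index name is made up, and with inheritance-based partitioning it would be created on each monthly child table:

    CREATE INDEX gpstracking_device_tracks_imei_dtp_idx
        ON gpstracking_device_tracks (imei, date_time_process DESC);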
", "msg_date": "Mon, 30 Sep 2013 13:31:30 -0300", "msg_from": "Rodriguez Fernando <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IFtwZ3NxbC1lcy1heXVkYV0gUmU6IFtwZ3NxbC1lcy1heXVkYV0=?=\n =?UTF-8?B?IEZ3ZDogSGVscCBvbiDhuZVlcmZvcm1hbmNl?=" }, { "msg_contents": "Hello Fernando

I am not an expert on the subject; for now I am going by what I find

 - The table is partitioned by month
 - indexes on the imei and the date
 - no history handling is done

regards
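
A quick way to show exactly how those indexes are defined (which is what the question above asks) is the pg_indexes view; illustrative, assuming the monthly partitions share the parent table's name as a prefix:

    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE tablename LIKE 'gpstracking_device_tracks%';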

On 30 September 2013 at 11:31, Rodriguez Fernando<[email protected]> wrote:

> On 30/09/2013 01:16 p.m., Carlos Eduardo Sotelo Pinto wrote:
>
> Hello Alvaro
>
> I have partially solved the problem with a query of this shape
>
> arr := regexp_split_to_array(_imeis, E'\\\\s+');
> RETURN QUERY
> SELECT
> gpstracking_device_tracks.....
> FROM (
> SELECT
> gpstracking_device_tracks......
> ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
> gpstracking_device_tracks.date_time_process DESC) as rnumber
> FROM gpstracking_device_tracks
> WHERE gpstracking_device_tracks.imei = ANY(arr)
> AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
> now())
> AND gpstracking_device_tracks.date_time_process <= NOW()
> ) AS gpstracking_device_tracks
> WHERE gpstracking_device_tracks.rnumber = 1;
>
> And now I am reading up a bit on tuning, since I am not a DBA, let alone
> an optimization expert, but I hope I can improve performance even further
>
> Many thanks to everyone for the help
>
> On 30 September 2013 at 11:12, Alvaro Herrera<[email protected]
> > wrote:
>
>> Carlos Eduardo Sotelo Pinto wrote:
>>
>> > DECLARE
>> > arr varchar[];
>> > BEGIN
>> > arr := regexp_split_to_array(_imeis, E'\\\\s+');
>> > FOR i IN 1..array_length(arr, 1) LOOP
>> > RETURN QUERY
>>
>> I think you should run a single query with all the elements of the
>> array, instead of one query per element. That is, remove the LOOP and
>> the LIMIT 1, and your WHERE should be something like
>>
>> ...
>>
>> > FROM gpstracking_device_tracks
>> > WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
>>
>> WHERE gpstracking_device_tracks.imei = ANY (arr) AND ...
>>
>> You will have to solve getting back only one row per imei in some other
>> way, of course.
>>
>> --
>> Álvaro Herrera                http://www.2ndQuadrant.com/
>> PostgreSQL Development, 24x7 Support, Training & Services
>>
>
> --
> Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
>     GNU Linux Admin | PHP Senior Web Developer
>     Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
>     GTalk: [email protected] | Skype: csotelop
>     MSN: [email protected] | Yahoo: csotelop
>     GNULinux RU #379182 | GNULinux RM #277661
> GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B
>
> What indexes do you have on the table, and how are they built?
> Does the table keep history, or do you purge it every so often?
>
>
> Regards, Fernando
>


-- 
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
    GNU Linux Admin | PHP Senior Web Developer
    Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
    GTalk: [email protected] | Skype: csotelop
    MSN: [email protected] | Yahoo: csotelop
    GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B
", "msg_date": "Mon, 30 Sep 2013 11:35:08 -0500", "msg_from": "Carlos Eduardo Sotelo Pinto <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IFtwZ3NxbC1lcy1heXVkYV0gUmU6IFtwZ3NxbC1lcy1heXVkYV0gUmU6IFtwZ3NxbA==?=\n =?UTF-8?B?LWVzLWF5dWRhXSBGd2Q6IEhlbHAgb24g4bmVZXJmb3JtYW5jZQ==?=" }, { "msg_contents": "On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
<[email protected]> wrote:
>
> I need a help on postgresql performance
>
> I have configurate my postgresql files for tunning my server, however it is
> slow and cpu resources are highter than 120%
>
> I have no idea on how to solve this issue, I was trying to search more infor
> on google but is not enough, I also have try autovacum sentences and reindex
> db, but it continues beeing slow
>
> My app is a gps listener that insert more than 6000 records per minutes
> using a tcp server developed on python twisted, where there is no problems,
> the problem is when I try to follow the gps devices on a map on a relatime,
> I am doing queries each 6 seconds to my database from my django app, for
> request last position using a stored procedure, but the query get slow on
> more than 50 devices and cpu start to using more than 120% of its resources
>
> Django App connect the postgres database directly, and tcp listener server
> for teh devices connect database on threaded way using pgbouncer, I have not
> using my django web app on pgbouncer caause I dont want to crash gps devices
> connection on the pgbouncer
>
> I hoe you could help on get a better performance
>
> I am attaching my store procedure, my conf files and my cpu, memory
> information
>
> **Stored procedure**
>
> CREATE OR REPLACE FUNCTION gps_get_live_location (
> _imeis varchar(8)
> )
> RETURNS TABLE (
> imei varchar,
> device_id integer,
> date_time_process timestamp with time zone,
> latitude double precision,
> longitude double precision,
> course smallint,
> speed smallint,
> mileage integer,
> gps_signal smallint,
> gsm_signal smallint,
> alarm_status boolean,
> gsm_status boolean,
> vehicle_status boolean,
> alarm_over_speed boolean,
> other text,
> address varchar
> ) AS $func$
> DECLARE
> arr varchar[];
> BEGIN
> arr := regexp_split_to_array(_imeis, E'\\\\s+');
> FOR i IN 1..array_length(arr, 1) LOOP
> RETURN QUERY
> SELECT
> gpstracking_device_tracks.imei,
> gpstracking_device_tracks.device_id,
> gpstracking_device_tracks.date_time_process,
> gpstracking_device_tracks.latitude,
> gpstracking_device_tracks.longitude,
> gpstracking_device_tracks.course,
> gpstracking_device_tracks.speed,
> gpstracking_device_tracks.mileage,
> gpstracking_device_tracks.gps_signal,
> gpstracking_device_tracks.gsm_signal,
> gpstracking_device_tracks.alarm_status,
> gpstracking_device_tracks.gps_status,
> gpstracking_device_tracks.vehicle_status,
> gpstracking_device_tracks.alarm_over_speed,
> gpstracking_device_tracks.other,
> gpstracking_device_tracks.address
> FROM gpstracking_device_tracks
> WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
> AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
> now())
> AND gpstracking_device_tracks.date_time_process <= NOW()
> ORDER BY gpstracking_device_tracks.date_time_process DESC
> LIMIT 1;
> END LOOP;
> RETURN;
> END;
> $func$
> LANGUAGE plpgsql VOLATILE SECURITY DEFINER;


Why are you doing this in a loop?  What's the point of the LIMIT 1?
You can almost certainly refactor this procedure into a vanilla query.

merlin
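
One possible "vanilla query" shape — given only as an illustration, and requiring PostgreSQL 9.3 or later because of LATERAL — that keeps the per-imei LIMIT 1 semantics:

    SELECT latest.*
    FROM regexp_split_to_table(_imeis, E'\\s+') AS d(imei)
    CROSS JOIN LATERAL (
        SELECT g.*
        FROM gpstracking_device_tracks g
        WHERE g.imei = d.imei
          AND g.date_time_process >= date_trunc('hour', now())
          AND g.date_time_process <= now()
        ORDER BY g.date_time_process DESC
        LIMIT 1
    ) AS latest;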
", "msg_date": "Wed, 2 Oct 2013 08:57:52 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re=3A_=5BGENERAL=5D_Help_on_=E1=B9=95erformance?=" }, { "msg_contents": "Thanks to all

I have fixed that by refactoring the function

BEGIN
 arr := regexp_split_to_array(_imeis, E'\\\\s+');
 RETURN QUERY
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address
FROM (
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address,
ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
gpstracking_device_tracks.date_time_process DESC) as rnumber
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = ANY(arr)
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND
gpstracking_device_tracks.date_time_process <= NOW()\n) AS gpstracking_device_tracks\nWHERE gpstracking_device_tracks.rnumber = 1;\nEND;\n\n\n2013/10/2 Merlin Moncure <[email protected]>\n\n> On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto\n> <[email protected]> wrote:\n> >\n> > I need a help on postgresql performance\n> >\n> > I have configurate my postgresql files for tunning my server, however it\n> is\n> > slow and cpu resources are highter than 120%\n> >\n> > I have no idea on how to solve this issue, I was trying to search more\n> infor\n> > on google but is not enough, I also have try autovacum sentences and\n> reindex\n> > db, but it continues beeing slow\n> >\n> > My app is a gps listener that insert more than 6000 records per minutes\n> > using a tcp server developed on python twisted, where there is no\n> problems,\n> > the problem is when I try to follow the gps devices on a map on a\n> relatime,\n> > I am doing queries each 6 seconds to my database from my django app, for\n> > request last position using a stored procedure, but the query get slow on\n> > more than 50 devices and cpu start to using more than 120% of its\n> resources\n> >\n> > Django App connect the postgres database directly, and tcp listener\n> server\n> > for teh devices connect database on threaded way using pgbouncer, I have\n> not\n> > using my django web app on pgbouncer caause I dont want to crash gps\n> devices\n> > connection on the pgbouncer\n> >\n> > I hoe you could help on get a better performance\n> >\n> > I am attaching my store procedure, my conf files and my cpu, memory\n> > information\n> >\n> > **Stored procedure**\n> >\n> > CREATE OR REPLACE FUNCTION gps_get_live_location (\n> > _imeis varchar(8)\n> > )\n> > RETURNS TABLE (\n> > imei varchar,\n> > device_id integer,\n> > date_time_process timestamp with time zone,\n> > latitude double precision,\n> > longitude double precision,\n> > course smallint,\n> > speed smallint,\n> > mileage integer,\n> > gps_signal smallint,\n> > gsm_signal smallint,\n> > alarm_status boolean,\n> > gsm_status boolean,\n> > vehicle_status boolean,\n> > alarm_over_speed boolean,\n> > other text,\n> > address varchar\n> > ) AS $func$\n> > DECLARE\n> > arr varchar[];\n> > BEGIN\n> > arr := regexp_split_to_array(_imeis, E'\\\\s+');\n> > FOR i IN 1..array_length(arr, 1) LOOP\n> > RETURN QUERY\n> > SELECT\n> > gpstracking_device_tracks.imei,\n> > gpstracking_device_tracks.device_id,\n> > gpstracking_device_tracks.date_time_process,\n> > gpstracking_device_tracks.latitude,\n> > gpstracking_device_tracks.longitude,\n> > gpstracking_device_tracks.course,\n> > gpstracking_device_tracks.speed,\n> > gpstracking_device_tracks.mileage,\n> > gpstracking_device_tracks.gps_signal,\n> > gpstracking_device_tracks.gsm_signal,\n> > gpstracking_device_tracks.alarm_status,\n> > gpstracking_device_tracks.gps_status,\n> > gpstracking_device_tracks.vehicle_status,\n> > gpstracking_device_tracks.alarm_over_speed,\n> > gpstracking_device_tracks.other,\n> > gpstracking_device_tracks.address\n> > FROM gpstracking_device_tracks\n> > WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR\n> > AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',\n> > now())\n> > AND gpstracking_device_tracks.date_time_process <= NOW()\n> > ORDER BY gpstracking_device_tracks.date_time_process DESC\n> > LIMIT 1;\n> > END LOOP;\n> > RETURN;\n> > END;\n> > $func$\n> > LANGUAGE plpgsql VOLATILE SECURITY DEFINER;\n>\n>\n> Why are you doing this in a loop? 
What's the point of the LIMIT 1?
> You can almost certainly refactor this procedure into a vanilla query.
>
> merlin
>


-- 
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
    GNU Linux Admin | PHP Senior Web Developer
    Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
    GTalk: [email protected] | Skype: csotelop
    MSN: [email protected] | Yahoo: csotelop
    GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B
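
A quick way to sanity-check the refactored function from psql — timings only, and the IMEI values here are made up:

    \timing on
    SELECT * FROM gps_get_live_location('358899050000001 358899050000002');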
", "msg_date": "Wed, 2 Oct 2013 11:22:27 -0500", "msg_from": "Carlos Eduardo Sotelo Pinto <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Re=3A_=5BGENERAL=5D_Help_on_=E1=B9=95erformance?=" }, { "msg_contents": "Hey, a short trick:
to avoid using the schema name multiple times (more readable and easier
to reuse),
you can use
SET search_path TO gpstracking_device_tracks, public;

(see the manual here:
http://www.postgresql.org/docs/current/static/sql-set.html)
A small alias-based alternative is sketched after the quoted message below.
Cheers,

Rémi-C


2013/10/2 Carlos Eduardo Sotelo Pinto <[email protected]>

> Thanks to all
>
> I have fix that refactoring the function
>
> BEGIN
> arr := regexp_split_to_array(_imeis, E'\\\\s+');
> RETURN QUERY
> SELECT
> gpstracking_device_tracks.imei,
> gpstracking_device_tracks.device_id,
> gpstracking_device_tracks.date_time_process,
> gpstracking_device_tracks.latitude,
> gpstracking_device_tracks.longitude,
> gpstracking_device_tracks.course,
> gpstracking_device_tracks.speed,
> gpstracking_device_tracks.mileage,
> gpstracking_device_tracks.gps_signal,
> gpstracking_device_tracks.gsm_signal,
> gpstracking_device_tracks.alarm_status,
> gpstracking_device_tracks.gps_status,
> gpstracking_device_tracks.vehicle_status,
> gpstracking_device_tracks.alarm_over_speed,
> gpstracking_device_tracks.other,
> gpstracking_device_tracks.address
> FROM (
> SELECT
> gpstracking_device_tracks.imei,
> gpstracking_device_tracks.device_id,
> gpstracking_device_tracks.date_time_process,
> gpstracking_device_tracks.latitude,
> gpstracking_device_tracks.longitude,
> gpstracking_device_tracks.course,
> gpstracking_device_tracks.speed,
> gpstracking_device_tracks.mileage,
> gpstracking_device_tracks.gps_signal,
> gpstracking_device_tracks.gsm_signal,
> gpstracking_device_tracks.alarm_status,
> gpstracking_device_tracks.gps_status,
> gpstracking_device_tracks.vehicle_status,
> gpstracking_device_tracks.alarm_over_speed,
> gpstracking_device_tracks.other,
> gpstracking_device_tracks.address,
> ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
> gpstracking_device_tracks.date_time_process DESC) as rnumber
> FROM gpstracking_device_tracks
> WHERE gpstracking_device_tracks.imei = ANY(arr)
> AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
> now())
> AND gpstracking_device_tracks.date_time_process <= NOW()
> ) AS gpstracking_device_tracks
> WHERE gpstracking_device_tracks.rnumber = 1;
> END;
>
>
> 2013/10/2 Merlin Moncure <[email protected]>
>
>> On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
>> <[email protected]> wrote:
>> >
>> > I need a help on postgresql performance
>> >
>> > I have configurate my postgresql files for tunning my server, however
>> it is
>> > slow and cpu resources are highter than 120%
>> >
>> > I have no idea on how to solve this issue, I was trying to search more
>> infor
>> > on google but is not enough, I also have try autovacum sentences and
>> reindex
>> > db, but it continues beeing
slow\n>> >\n>> > My app is a gps listener that insert more than 6000 records per minutes\n>> > using a tcp server developed on python twisted, where there is no\n>> problems,\n>> > the problem is when I try to follow the gps devices on a map on a\n>> relatime,\n>> > I am doing queries each 6 seconds to my database from my django app, for\n>> > request last position using a stored procedure, but the query get slow\n>> on\n>> > more than 50 devices and cpu start to using more than 120% of its\n>> resources\n>> >\n>> > Django App connect the postgres database directly, and tcp listener\n>> server\n>> > for teh devices connect database on threaded way using pgbouncer, I\n>> have not\n>> > using my django web app on pgbouncer caause I dont want to crash gps\n>> devices\n>> > connection on the pgbouncer\n>> >\n>> > I hoe you could help on get a better performance\n>> >\n>> > I am attaching my store procedure, my conf files and my cpu, memory\n>> > information\n>> >\n>> > **Stored procedure**\n>> >\n>> > CREATE OR REPLACE FUNCTION gps_get_live_location (\n>> > _imeis varchar(8)\n>> > )\n>> > RETURNS TABLE (\n>> > imei varchar,\n>> > device_id integer,\n>> > date_time_process timestamp with time zone,\n>> > latitude double precision,\n>> > longitude double precision,\n>> > course smallint,\n>> > speed smallint,\n>> > mileage integer,\n>> > gps_signal smallint,\n>> > gsm_signal smallint,\n>> > alarm_status boolean,\n>> > gsm_status boolean,\n>> > vehicle_status boolean,\n>> > alarm_over_speed boolean,\n>> > other text,\n>> > address varchar\n>> > ) AS $func$\n>> > DECLARE\n>> > arr varchar[];\n>> > BEGIN\n>> > arr := regexp_split_to_array(_imeis, E'\\\\s+');\n>> > FOR i IN 1..array_length(arr, 1) LOOP\n>> > RETURN QUERY\n>> > SELECT\n>> > gpstracking_device_tracks.imei,\n>> > gpstracking_device_tracks.device_id,\n>> > gpstracking_device_tracks.date_time_process,\n>> > gpstracking_device_tracks.latitude,\n>> > gpstracking_device_tracks.longitude,\n>> > gpstracking_device_tracks.course,\n>> > gpstracking_device_tracks.speed,\n>> > gpstracking_device_tracks.mileage,\n>> > gpstracking_device_tracks.gps_signal,\n>> > gpstracking_device_tracks.gsm_signal,\n>> > gpstracking_device_tracks.alarm_status,\n>> > gpstracking_device_tracks.gps_status,\n>> > gpstracking_device_tracks.vehicle_status,\n>> > gpstracking_device_tracks.alarm_over_speed,\n>> > gpstracking_device_tracks.other,\n>> > gpstracking_device_tracks.address\n>> > FROM gpstracking_device_tracks\n>> > WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR\n>> > AND gpstracking_device_tracks.date_time_process >=\n>> date_trunc('hour',\n>> > now())\n>> > AND gpstracking_device_tracks.date_time_process <= NOW()\n>> > ORDER BY gpstracking_device_tracks.date_time_process DESC\n>> > LIMIT 1;\n>> > END LOOP;\n>> > RETURN;\n>> > END;\n>> > $func$\n>> > LANGUAGE plpgsql VOLATILE SECURITY DEFINER;\n>>\n>>\n>> Why are you doing this in a loop? 
What's the point of the LIMIT 1?\n>> You can almost certainly refactor this procedure into a vanilla query.\n>>\n>> merlin\n>>\n>\n>\n>\n> --\n> Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter\n> GNU Linux Admin | PHP Senior Web Developer\n> Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794\n> GTalk: [email protected] | Skype: csotelop\n> MSN: [email protected] | Yahoo: csotelop\n> GNULinux RU #379182 | GNULinux RM #277661\n> GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B\n>\n\nHey short trick :to avoid to use the schema name multiple time (more readable and more easy to re use).You can use the SET search_path gpstracking_device_tracks, public;\n(see manual here : http://www.postgresql.org/docs/current/static/sql-set.html)\nCheers,Rémi-C\n2013/10/2 Carlos Eduardo Sotelo Pinto <[email protected]>\nThanks to allI have fix that refactoring the function \nBEGIN    arr := regexp_split_to_array(_imeis, E'\\\\s+'); \n RETURN QUERY  SELECT gpstracking_device_tracks.imei,\n gpstracking_device_tracks.device_id,  gpstracking_device_tracks.date_time_process, gpstracking_device_tracks.latitude,\n gpstracking_device_tracks.longitude, gpstracking_device_tracks.course, gpstracking_device_tracks.speed,\n gpstracking_device_tracks.mileage, gpstracking_device_tracks.gps_signal, gpstracking_device_tracks.gsm_signal,\n gpstracking_device_tracks.alarm_status, gpstracking_device_tracks.gps_status, gpstracking_device_tracks.vehicle_status,\n gpstracking_device_tracks.alarm_over_speed, gpstracking_device_tracks.other, gpstracking_device_tracks.address\n FROM ( SELECT  gpstracking_device_tracks.imei,\n gpstracking_device_tracks.device_id,  gpstracking_device_tracks.date_time_process, gpstracking_device_tracks.latitude,\n gpstracking_device_tracks.longitude, gpstracking_device_tracks.course, gpstracking_device_tracks.speed,\n gpstracking_device_tracks.mileage, gpstracking_device_tracks.gps_signal, gpstracking_device_tracks.gsm_signal,\n gpstracking_device_tracks.alarm_status, gpstracking_device_tracks.gps_status, gpstracking_device_tracks.vehicle_status,\n gpstracking_device_tracks.alarm_over_speed, gpstracking_device_tracks.other, gpstracking_device_tracks.address,\n ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY gpstracking_device_tracks.date_time_process DESC) as rnumber FROM gpstracking_device_tracks \n WHERE gpstracking_device_tracks.imei = ANY(arr) AND gpstracking_device_tracks.date_time_process >= date_trunc('hour', now()) \n AND gpstracking_device_tracks.date_time_process <= NOW() ) AS gpstracking_device_tracks WHERE gpstracking_device_tracks.rnumber = 1;\nEND;2013/10/2 Merlin Moncure <[email protected]>\n\nOn Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto\n<[email protected]> wrote:\n>\n> I need a help on postgresql performance\n>\n> I have configurate my postgresql files for tunning my server, however it is\n> slow and cpu resources are highter than 120%\n>\n> I have no idea on how to solve this issue, I was trying to search more infor\n> on google but is not enough, I also have try autovacum sentences and reindex\n> db, but it continues beeing slow\n>\n> My app is a gps listener that insert more than 6000 records per minutes\n> using a tcp server developed on python twisted, where there is no problems,\n> the problem is when I try to follow the gps devices on a map on a relatime,\n> I am doing queries each 6 seconds to my database from my django app, for\n> request last position using a stored procedure, but the query get slow 
on\n> more than 50 devices and cpu start to using more than 120% of its resources\n>\n> Django App connect the postgres database directly, and tcp listener server\n> for teh devices connect database on threaded way using pgbouncer, I have not\n> using my django web app on pgbouncer caause I dont want to crash gps devices\n> connection on the pgbouncer\n>\n> I hoe you could help on get a better performance\n>\n> I am attaching my store procedure, my conf files and my cpu, memory\n> information\n>\n> **Stored procedure**\n>\n>     CREATE OR REPLACE FUNCTION gps_get_live_location (\n>     _imeis varchar(8)\n>     )\n>     RETURNS TABLE (\n>     imei varchar,\n>     device_id integer,\n>     date_time_process timestamp with time zone,\n>     latitude double precision,\n>     longitude double precision,\n>     course smallint,\n>     speed smallint,\n>     mileage integer,\n>     gps_signal smallint,\n>     gsm_signal smallint,\n>     alarm_status boolean,\n>     gsm_status boolean,\n>     vehicle_status boolean,\n>     alarm_over_speed boolean,\n>     other text,\n>     address varchar\n>     ) AS $func$\n>     DECLARE\n>     arr varchar[];\n>     BEGIN\n>         arr := regexp_split_to_array(_imeis, E'\\\\s+');\n>     FOR i IN 1..array_length(arr, 1) LOOP\n>     RETURN QUERY\n>     SELECT\n>     gpstracking_device_tracks.imei,\n>     gpstracking_device_tracks.device_id,\n>     gpstracking_device_tracks.date_time_process,\n>     gpstracking_device_tracks.latitude,\n>     gpstracking_device_tracks.longitude,\n>     gpstracking_device_tracks.course,\n>     gpstracking_device_tracks.speed,\n>     gpstracking_device_tracks.mileage,\n>     gpstracking_device_tracks.gps_signal,\n>     gpstracking_device_tracks.gsm_signal,\n>     gpstracking_device_tracks.alarm_status,\n>     gpstracking_device_tracks.gps_status,\n>     gpstracking_device_tracks.vehicle_status,\n>     gpstracking_device_tracks.alarm_over_speed,\n>     gpstracking_device_tracks.other,\n>     gpstracking_device_tracks.address\n>     FROM gpstracking_device_tracks\n>     WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR\n>     AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',\n> now())\n>     AND gpstracking_device_tracks.date_time_process <= NOW()\n>     ORDER BY gpstracking_device_tracks.date_time_process DESC\n>     LIMIT 1;\n>     END LOOP;\n>     RETURN;\n>     END;\n>     $func$\n>     LANGUAGE plpgsql VOLATILE SECURITY DEFINER;\n\n\nWhy are you doing this in a loop?  What's the point of the LIMIT 1?\nYou can almost certainly refactor this procedure into a vanilla query.\n\nmerlin\n-- \n\nCarlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter\n\n    GNU Linux Admin | PHP Senior Web Developer    Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794\n\n    GTalk: [email protected] | Skype: csotelop\n\n    MSN: [email protected] | Yahoo: csotelop    GNULinux RU #379182 | GNULinux RM #277661\n\nGPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B", "msg_date": "Fri, 4 Oct 2013 10:46:06 +0200", "msg_from": "=?UTF-8?Q?R=C3=A9mi_Cura?= <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re=3A_=5BGENERAL=5D_Re=3A_=5BGENERAL=5D_Help_on_=E1=B9=95erformance?=" } ]
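A note on the two fixes discussed in the thread above. The SET shown needs TO or = between the setting name and its value, and search_path takes schema names rather than table names, so as quoted it would raise a syntax error; a corrected form, assuming the objects live in a schema actually named gpstracking (a made-up name here), would be:

    SET search_path TO gpstracking, public;

Separately, the latest-row-per-imei lookup that was refactored with ROW_NUMBER() can also be written with DISTINCT ON, which avoids both the plpgsql loop and the window function. This is only a sketch, not the code the poster deployed: the column list is abbreviated and 'imei1 imei2' stands in for the function's _imeis argument.

    SELECT DISTINCT ON (t.imei)
           t.imei, t.device_id, t.date_time_process,
           t.latitude, t.longitude, t.address
    FROM   gpstracking_device_tracks t
    WHERE  t.imei = ANY (regexp_split_to_array('imei1 imei2', E'\s+'))
      AND  t.date_time_process >= date_trunc('hour', now())
      AND  t.date_time_process <= now()
    -- DISTINCT ON keeps the first row per imei, so sort newest-first within each imei
    ORDER  BY t.imei, t.date_time_process DESC;

Either formulation benefits from an index on (imei, date_time_process DESC) if one is not already present.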
[ { "msg_contents": "If we reset the statistics counters using pg_stat_reset() will it affect\nthe performance of the database? Eg are these the same statistics used by\nthe planner?\nThanks\n\nIf we reset the statistics counters using pg_stat_reset() will it affect the performance of the database? Eg are these the same statistics used by the planner?\nThanks", "msg_date": "Tue, 1 Oct 2013 14:50:29 +0300", "msg_from": "Xenofon Papadopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Reseting statistics counters" }, { "msg_contents": "Hello\n\n\n2013/10/1 Xenofon Papadopoulos <[email protected]>\n\n> If we reset the statistics counters using pg_stat_reset() will it affect\n> the performance of the database? Eg are these the same statistics used by\n> the planner?\n> Thanks\n>\n>\nthese statistics are used only for autovacuum, what I know. So you can\nimpact a autovacuum, but no planner\n\nRegards\n\nPavel\n\nHello2013/10/1 Xenofon Papadopoulos <[email protected]>\nIf we reset the statistics counters using pg_stat_reset() will it affect the performance of the database? Eg are these the same statistics used by the planner?\nThanks\nthese statistics are used only for autovacuum, what I know. So you can impact a autovacuum, but no plannerRegardsPavel", "msg_date": "Tue, 1 Oct 2013 13:56:36 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reseting statistics counters" }, { "msg_contents": "Sorry for the late reply, but as far as I know when you run pg_stat_reset()\nyou should always run analyze manually of the database to populate the\nstatistics.\n\n\nStrahinja Kustudić | Lead System Engineer | Nordeus\n\n\nOn Tue, Oct 1, 2013 at 1:50 PM, Xenofon Papadopoulos <[email protected]>wrote:\n\n> If we reset the statistics counters using pg_stat_reset() will it affect\n> the performance of the database? Eg are these the same statistics used by\n> the planner?\n> Thanks\n>\n>\n\nSorry for the late reply, but as far as I know when you run pg_stat_reset() you should always run analyze manually of the database to populate the statistics.\n\nStrahinja Kustudić | Lead System Engineer | Nordeus\n\nOn Tue, Oct 1, 2013 at 1:50 PM, Xenofon Papadopoulos <[email protected]> wrote:\nIf we reset the statistics counters using pg_stat_reset() will it affect the performance of the database? Eg are these the same statistics used by the planner?\nThanks", "msg_date": "Sat, 16 Nov 2013 20:49:19 +0100", "msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reseting statistics counters" }, { "msg_contents": "On 16.11.2013 20:49, Strahinja Kustudić wrote:\n> Sorry for the late reply, but as far as I know when you run \n> pg_stat_reset() you should always run analyze manually of the\n> database to populate the statistics.\n\nWhy?\n\nThere are two kinds of stats in the database - stats used by the planner\n(common column values, histograms, ...) and runtime stats. pg_stat_reset\nonly deals with the latter, it won't discard histograms or anything like\nthat.\n\nAnd runnning analyze won't magically populate the runtime stats - for\nexample how could it populate number of sequential/index scans or the\ntimestamp of the last autovacuum?\n\nThe only thing that may be influenced by this is autovacuum, because\nthis will remove timestamp of the last run on the tables, number of\ndeleted/dead tuples etc. 
So it will be invoked on all tables, collecting\nthis info.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 07 Dec 2013 00:31:40 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reseting statistics counters" } ]
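A minimal sketch of the distinction drawn in this thread, against a hypothetical table named some_table: pg_stat_reset() zeroes the runtime counters exposed through the pg_stat_* views for the current database (which is why autovacuum can be affected), while the planner's column statistics in pg_stats/pg_statistic are untouched by the reset and are only rebuilt by ANALYZE.

    -- runtime counters: these are what pg_stat_reset() zeroes
    SELECT seq_scan, idx_scan, n_dead_tup, last_autovacuum
    FROM   pg_stat_user_tables
    WHERE  relname = 'some_table';

    SELECT pg_stat_reset();   -- affects only the current database's counters

    -- planner statistics: unaffected by the reset, refreshed only by (auto)ANALYZE
    SELECT attname, n_distinct, null_frac
    FROM   pg_stats
    WHERE  tablename = 'some_table';

    -- the manual ANALYZE suggested above repopulates pg_stats and resets the
    -- analyze baseline autovacuum works from, but it does not restore the counters
    ANALYZE some_table;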
[ { "msg_contents": "Howdy,\n\nI'm going to post this in 2 parts as I think it's too big for 1 post.\n\nEnvironment:\n\nPG 8.4.17\nLinux Ubuntu 10.04\nTotal RAM - 1G\n\nThings that have been performed:\n\n\n1. Explain on SELECT.\n\n2. ANALYZE database.\n\n3. VACUUM database.\n\n4. shared_buffers = 256M\n\n5. effective_cache_size = 768M\n\n6. work_mem = 512M\n\nTable DDL:\n\nnms=# \\d syslog\n View \"public.syslog\"\n Column | Type | Modifiers\n----------+-----------------------------+-----------\nip | inet |\nfacility | character varying(10) |\nlevel | character varying(10) |\ndatetime | timestamp without time zone |\nprogram | character varying(25) |\nmsg | text |\nseq | bigint |\nView definition:\nSELECT syslog_master.ip, syslog_master.facility, syslog_master.level, syslog_master.datetime, syslog_master.program, syslog_master.msg, syslog_master.seq\n FROM syslog_master;\nRules:\nsyslog_insert_201308 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-08-01'::date AND new.datetime < '2013-09-01'::date DO INSTEAD INSERT INTO syslog_201308 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201309 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-09-01'::date AND new.datetime < '2013-10-01'::date DO INSTEAD INSERT INTO syslog_201309 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201310 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-10-01'::date AND new.datetime < '2013-11-01'::date DO INSTEAD INSERT INTO syslog_201310 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_null AS\n ON INSERT TO syslog DO INSTEAD NOTHING\n\nnms=#\n\nnms=# \\d devices\nhostname | character varying(20) |\nhostpop | character varying(20) |\nhostgroup | character varying(20) |\nrack | character varying(10) |\nasset | character varying(10) |\nip | inet |\nsnmprw | character varying(20) |\nsnmpro | character varying(20) |\nsnmpver | character varying(3) |\nconsole | character varying(20) |\npsu1 | character varying(20) |\npsu2 | character varying(20) |\npsu3 | character varying(20) |\npsu4 | character varying(20) |\nalias1 | character varying(20) |\nalias2 | character varying(20) |\nfailure | character varying(255) |\nmodified | timestamp without time zone | not null default now()\nmodified_by | character varying(20) |\nactive | character(1) | default 't'::bpchar\nrad_secret | character varying(20) |\nrad_atr | character varying(40) |\nsnmpdev | integer |\nnetflow | text |\ncpu | integer |\ntemp | integer |\nfirmware_type_id | bigint | default 1\nIndexes:\n \"id_pkey\" PRIMARY KEY, btree (id)\n \"devices_active_index\" btree (active)\n \"devices_failure\" btree (failure)\n \"devices_hostgroup\" btree (hostgroup)\n \"devices_hostname\" btree (hostname)\n \"devices_hostpop\" btree (hostpop)\n \"devices_ip_index\" btree (ip)\n \"devices_snmprw\" btree (snmprw)\nForeign-key constraints:\n \"devices_firmware_type_id_fkey\" FOREIGN KEY (firmware_type_id) REFERENCES firmware_type(id)\nReferenced by:\n TABLE \"ac_attributes\" CONSTRAINT \"ac_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n TABLE \"acls_matrix\" CONSTRAINT \"acls_matrix_device_id_fkey\" FOREIGN KEY (device_id) REFERENCES devices(id) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"ip_local_pool_aggregates\" CONSTRAINT 
\"ip_local_pool_aggregates_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id)\n TABLE \"ipsla_instances\" CONSTRAINT \"ipsla_instances_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id) ON DELETE CASCADE\n TABLE \"lns_attributes\" CONSTRAINT \"lns_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n\n(END)\n\nnms=# \\d mongroups\n Table \"public.mongroups\"\n Column | Type | Modifiers\n------------+-----------------------+-----------\nhostgroup | character varying(20) |\nlocale | text |\ndepartment | character varying(20) |\nIndexes:\n \"ukey_hostgroup_department\" UNIQUE, btree (hostgroup, department)\n\nnms=#\n\n<end part I>\n\nThank you,\n\nSam\n\n\n\n\n\n\n\n\n\nHowdy,\n \nI’m going to post this in 2 parts as I think it’s too big for 1 post.\n \nEnvironment:\n \nPG 8.4.17\nLinux Ubuntu 10.04\nTotal RAM – 1G\n \nThings that have been performed:\n \n1.      \nExplain on SELECT.\n2.      \nANALYZE database.\n3.      \nVACUUM database.\n4.      \nshared_buffers = 256M\n5.      \neffective_cache_size = 768M\n6.      \nwork_mem = 512M\n \nTable DDL:\n \nnms=# \\d syslog\n                View \"public.syslog\"\n  Column  |            Type             | Modifiers\n----------+-----------------------------+-----------\nip       | inet                        |\nfacility | character varying(10)       |\nlevel    | character varying(10)       |\ndatetime | timestamp without time zone |\nprogram  | character varying(25)       |\nmsg      | text                        |\nseq      | bigint                      |\nView definition:\nSELECT syslog_master.ip, syslog_master.facility, syslog_master.level, syslog_master.datetime, syslog_master.program, syslog_master.msg, syslog_master.seq\n   FROM syslog_master;\nRules:\nsyslog_insert_201308 AS\n    ON INSERT TO syslog\n   WHERE new.datetime >= '2013-08-01'::date AND new.datetime < '2013-09-01'::date DO INSTEAD  INSERT INTO syslog_201308 (ip, facility, level, datetime, program, msg)\n  VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201309 AS\n    ON INSERT TO syslog\n   WHERE new.datetime >= '2013-09-01'::date AND new.datetime < '2013-10-01'::date DO INSTEAD  INSERT INTO syslog_201309 (ip, facility, level, datetime, program, msg)\n  VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201310 AS\n    ON INSERT TO syslog\n   WHERE new.datetime >= '2013-10-01'::date AND new.datetime < '2013-11-01'::date DO INSTEAD  INSERT INTO syslog_201310 (ip, facility, level, datetime, program, msg)\n  VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_null AS\n    ON INSERT TO syslog DO INSTEAD NOTHING\n \nnms=#\n \nnms=# \\d devices\nhostname         | character varying(20)       |\nhostpop          | character varying(20)       |\nhostgroup        | character varying(20)       |\nrack             | character varying(10)       |\nasset            | character varying(10)       |\nip               | inet                        |\nsnmprw           | character varying(20)       |\nsnmpro           | character varying(20)       |\nsnmpver          | character varying(3)        |\nconsole          | character varying(20)       |\npsu1             | character varying(20)       |\npsu2             | character varying(20)       |\npsu3             | character varying(20)       |\npsu4             | character varying(20)       |\nalias1           | character varying(20)       |\nalias2           | character varying(20)       
|\nfailure          | character varying(255)      |\nmodified         | timestamp without time zone | not null default now()\nmodified_by      | character varying(20)       |\nactive           | character(1)                | default 't'::bpchar\nrad_secret       | character varying(20)       |\nrad_atr          | character varying(40)       |\nsnmpdev          | integer                     |\nnetflow          | text                        |\ncpu              | integer                     |\ntemp             | integer                     |\nfirmware_type_id | bigint                      | default 1\nIndexes:\n    \"id_pkey\" PRIMARY KEY, btree (id)\n    \"devices_active_index\" btree (active)\n    \"devices_failure\" btree (failure)\n    \"devices_hostgroup\" btree (hostgroup)\n    \"devices_hostname\" btree (hostname)\n    \"devices_hostpop\" btree (hostpop)\n    \"devices_ip_index\" btree (ip)\n    \"devices_snmprw\" btree (snmprw)\nForeign-key constraints:\n    \"devices_firmware_type_id_fkey\" FOREIGN KEY (firmware_type_id) REFERENCES firmware_type(id)\nReferenced by:\n    TABLE \"ac_attributes\" CONSTRAINT \"ac_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n    TABLE \"acls_matrix\" CONSTRAINT \"acls_matrix_device_id_fkey\" FOREIGN KEY (device_id) REFERENCES devices(id) ON UPDATE CASCADE ON DELETE CASCADE\n    TABLE \"ip_local_pool_aggregates\" CONSTRAINT \"ip_local_pool_aggregates_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id)\n    TABLE \"ipsla_instances\" CONSTRAINT \"ipsla_instances_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id) ON DELETE CASCADE\n    TABLE \"lns_attributes\" CONSTRAINT \"lns_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n \n(END)\n \nnms=# \\d mongroups\n            Table \"public.mongroups\"\n   Column   |         Type          | Modifiers\n------------+-----------------------+-----------\nhostgroup  | character varying(20) |\nlocale     | text                  |\ndepartment | character varying(20) |\nIndexes:\n    \"ukey_hostgroup_department\" UNIQUE, btree (hostgroup, department)\n \nnms=#\n \n<end part I>\n \nThank you,\n \nSam", "msg_date": "Thu, 3 Oct 2013 00:56:24 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "57 minute SELECT" }, { "msg_contents": "Ok, let's try 3 parts:\n\nTable counts:\n\nsyslog - 150200285\ndevices - 3291\nmongroups - 71\n\nThe query:\n\nSELECT syslog.ip,\n syslog.msg,\n syslog.datetime,\n devices.hostname,\n devices.hostpop\nFROM syslog,\n devices\nWHERE syslog.ip IN\n (SELECT ip\n FROM devices,\n mongroups\n WHERE (active = 't'\n OR active = 's')\n AND devices.hostgroup = mongroups.hostgroup\n AND devices.hostname || '.' || devices.hostpop ~* E'pe1.mel4'\n AND devices.id != '1291')\n AND datetime <= '2013-08-01 00:00:00'\n AND datetime >= '2013-04-12 00:00:00'\n AND syslog.ip = devices.ip\n AND (devices.active = 't'\n OR devices.active = 's');\n\n<end part II>\n\nThank you,\n\nSam\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Samuel Stearns\nSent: Thursday, 3 October 2013 10:26 AM\nTo: [email protected]\nSubject: [PERFORM] 57 minute SELECT\n\nHowdy,\n\nI'm going to post this in 2 parts as I think it's too big for 1 post.\n\nEnvironment:\n\nPG 8.4.17\nLinux Ubuntu 10.04\nTotal RAM - 1G\n\nThings that have been performed:\n\n\n1. Explain on SELECT.\n\n2. ANALYZE database.\n\n3. VACUUM database.\n\n4. shared_buffers = 256M\n\n5. effective_cache_size = 768M\n\n6. 
work_mem = 512M\n\nTable DDL:\n\nnms=# \\d syslog\n View \"public.syslog\"\n Column | Type | Modifiers\n----------+-----------------------------+-----------\nip | inet |\nfacility | character varying(10) |\nlevel | character varying(10) |\ndatetime | timestamp without time zone |\nprogram | character varying(25) |\nmsg | text |\nseq | bigint |\nView definition:\nSELECT syslog_master.ip, syslog_master.facility, syslog_master.level, syslog_master.datetime, syslog_master.program, syslog_master.msg, syslog_master.seq\n FROM syslog_master;\nRules:\nsyslog_insert_201308 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-08-01'::date AND new.datetime < '2013-09-01'::date DO INSTEAD INSERT INTO syslog_201308 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201309 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-09-01'::date AND new.datetime < '2013-10-01'::date DO INSTEAD INSERT INTO syslog_201309 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201310 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-10-01'::date AND new.datetime < '2013-11-01'::date DO INSTEAD INSERT INTO syslog_201310 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_null AS\n ON INSERT TO syslog DO INSTEAD NOTHING\n\nnms=#\n\nnms=# \\d devices\nhostname | character varying(20) |\nhostpop | character varying(20) |\nhostgroup | character varying(20) |\nrack | character varying(10) |\nasset | character varying(10) |\nip | inet |\nsnmprw | character varying(20) |\nsnmpro | character varying(20) |\nsnmpver | character varying(3) |\nconsole | character varying(20) |\npsu1 | character varying(20) |\npsu2 | character varying(20) |\npsu3 | character varying(20) |\npsu4 | character varying(20) |\nalias1 | character varying(20) |\nalias2 | character varying(20) |\nfailure | character varying(255) |\nmodified | timestamp without time zone | not null default now()\nmodified_by | character varying(20) |\nactive | character(1) | default 't'::bpchar\nrad_secret | character varying(20) |\nrad_atr | character varying(40) |\nsnmpdev | integer |\nnetflow | text |\ncpu | integer |\ntemp | integer |\nfirmware_type_id | bigint | default 1\nIndexes:\n \"id_pkey\" PRIMARY KEY, btree (id)\n \"devices_active_index\" btree (active)\n \"devices_failure\" btree (failure)\n \"devices_hostgroup\" btree (hostgroup)\n \"devices_hostname\" btree (hostname)\n \"devices_hostpop\" btree (hostpop)\n \"devices_ip_index\" btree (ip)\n \"devices_snmprw\" btree (snmprw)\nForeign-key constraints:\n \"devices_firmware_type_id_fkey\" FOREIGN KEY (firmware_type_id) REFERENCES firmware_type(id)\nReferenced by:\n TABLE \"ac_attributes\" CONSTRAINT \"ac_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n TABLE \"acls_matrix\" CONSTRAINT \"acls_matrix_device_id_fkey\" FOREIGN KEY (device_id) REFERENCES devices(id) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"ip_local_pool_aggregates\" CONSTRAINT \"ip_local_pool_aggregates_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id)\n TABLE \"ipsla_instances\" CONSTRAINT \"ipsla_instances_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id) ON DELETE CASCADE\n TABLE \"lns_attributes\" CONSTRAINT \"lns_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n\n(END)\n\nnms=# \\d mongroups\n Table 
\"public.mongroups\"\n Column | Type | Modifiers\n------------+-----------------------+-----------\nhostgroup | character varying(20) |\nlocale | text |\ndepartment | character varying(20) |\nIndexes:\n \"ukey_hostgroup_department\" UNIQUE, btree (hostgroup, department)\n\nnms=#\n\n<end part I>\n\nThank you,\n\nSam\n\n\n\n\n\n\n\n\n\nOk, let’s try 3 parts:\n \nTable counts:\n \nsyslog – 150200285\ndevices – 3291\nmongroups – 71\n \nThe query:\n \nSELECT syslog.ip,\n       syslog.msg,\n       syslog.datetime,\n       devices.hostname,\n       devices.hostpop\nFROM syslog,\n     devices\nWHERE syslog.ip IN\n    (SELECT ip\n     FROM devices,\n          mongroups\n     WHERE (active = 't'\n            OR active = 's')\n       AND devices.hostgroup = mongroups.hostgroup\n       AND devices.hostname || '.' || devices.hostpop ~* E'pe1.mel4'\n       AND devices.id != '1291')\n  AND datetime <= '2013-08-01 00:00:00'\n  AND datetime >= '2013-04-12 00:00:00'\n  AND syslog.ip = devices.ip\n  AND (devices.active = 't'\n       OR devices.active = 's');\n \n<end part II>\n \nThank you,\n \nSam\n \n\n\nFrom: [email protected]\n [mailto:[email protected]] On Behalf Of Samuel Stearns\nSent: Thursday, 3 October 2013 10:26 AM\nTo: [email protected]\nSubject: [PERFORM] 57 minute SELECT\n\n\n \nHowdy,\n \nI’m going to post this in 2 parts as I think it’s too big for 1 post.\n \nEnvironment:\n \nPG 8.4.17\nLinux Ubuntu 10.04\nTotal RAM – 1G\n \nThings that have been performed:\n \n1.      \nExplain on SELECT.\n2.      \nANALYZE database.\n3.      \nVACUUM database.\n4.      \nshared_buffers = 256M\n5.      \neffective_cache_size = 768M\n6.      \nwork_mem = 512M\n \nTable DDL:\n \nnms=# \\d syslog\n                View \"public.syslog\"\n  Column  |            Type             | Modifiers\n----------+-----------------------------+-----------\nip       | inet                        |\nfacility | character varying(10)       |\nlevel    | character varying(10)       |\ndatetime | timestamp without time zone |\nprogram  | character varying(25)       |\nmsg      | text                        |\nseq      | bigint                      |\nView definition:\nSELECT syslog_master.ip, syslog_master.facility, syslog_master.level, syslog_master.datetime, syslog_master.program, syslog_master.msg, syslog_master.seq\n   FROM syslog_master;\nRules:\nsyslog_insert_201308 AS\n    ON INSERT TO syslog\n   WHERE new.datetime >= '2013-08-01'::date AND new.datetime < '2013-09-01'::date DO INSTEAD  INSERT INTO syslog_201308 (ip, facility, level, datetime, program, msg)\n  VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201309 AS\n    ON INSERT TO syslog\n   WHERE new.datetime >= '2013-09-01'::date AND new.datetime < '2013-10-01'::date DO INSTEAD  INSERT INTO syslog_201309 (ip, facility, level, datetime, program, msg)\n  VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201310 AS\n    ON INSERT TO syslog\n   WHERE new.datetime >= '2013-10-01'::date AND new.datetime < '2013-11-01'::date DO INSTEAD  INSERT INTO syslog_201310 (ip, facility, level, datetime, program, msg)\n  VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_null AS\n    ON INSERT TO syslog DO INSTEAD NOTHING\n \nnms=#\n \nnms=# \\d devices\nhostname         | character varying(20)       |\nhostpop          | character varying(20)       |\nhostgroup        | character varying(20)       |\nrack             | character varying(10)       |\nasset        
    | character varying(10)       |\nip               | inet                        |\nsnmprw           | character varying(20)       |\nsnmpro           | character varying(20)       |\nsnmpver          | character varying(3)        |\nconsole          | character varying(20)       |\npsu1             | character varying(20)       |\npsu2             | character varying(20)       |\npsu3             | character varying(20)       |\npsu4             | character varying(20)       |\nalias1           | character varying(20)       |\nalias2           | character varying(20)       |\nfailure          | character varying(255)      |\nmodified         | timestamp without time zone | not null default now()\nmodified_by      | character varying(20)       |\nactive           | character(1)                | default 't'::bpchar\nrad_secret       | character varying(20)       |\nrad_atr          | character varying(40)       |\nsnmpdev          | integer                     |\nnetflow          | text                        |\ncpu              | integer                     |\ntemp             | integer                     |\nfirmware_type_id | bigint                      | default 1\nIndexes:\n    \"id_pkey\" PRIMARY KEY, btree (id)\n    \"devices_active_index\" btree (active)\n    \"devices_failure\" btree (failure)\n    \"devices_hostgroup\" btree (hostgroup)\n    \"devices_hostname\" btree (hostname)\n    \"devices_hostpop\" btree (hostpop)\n    \"devices_ip_index\" btree (ip)\n    \"devices_snmprw\" btree (snmprw)\nForeign-key constraints:\n    \"devices_firmware_type_id_fkey\" FOREIGN KEY (firmware_type_id) REFERENCES firmware_type(id)\nReferenced by:\n    TABLE \"ac_attributes\" CONSTRAINT \"ac_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n    TABLE \"acls_matrix\" CONSTRAINT \"acls_matrix_device_id_fkey\" FOREIGN KEY (device_id) REFERENCES devices(id) ON UPDATE CASCADE ON DELETE CASCADE\n    TABLE \"ip_local_pool_aggregates\" CONSTRAINT \"ip_local_pool_aggregates_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id)\n    TABLE \"ipsla_instances\" CONSTRAINT \"ipsla_instances_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id) ON DELETE CASCADE\n    TABLE \"lns_attributes\" CONSTRAINT \"lns_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n \n(END)\n \nnms=# \\d mongroups\n            Table \"public.mongroups\"\n   Column   |         Type          | Modifiers\n------------+-----------------------+-----------\nhostgroup  | character varying(20) |\nlocale     | text                  |\ndepartment | character varying(20) |\nIndexes:\n    \"ukey_hostgroup_department\" UNIQUE, btree (hostgroup, department)\n \nnms=#\n \n<end part I>\n \nThank you,\n \nSam", "msg_date": "Thu, 3 Oct 2013 01:03:51 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "Samuel Stearns-2 wrote\n> Total RAM - 1G\n> \n> \n> 1. 
Explain on SELECT.\n\nSo either this is a typo (1 GB of RAM) or your query is likely ending up I/O\nbound.\n\nYou should probably provide EXPLAIN and EXPLAIN (ANALYZE) output since even\nwith the schema it is impossible for someone to see what the planner is\nproposing for a multiple-million record source table that is going to be\nempty if all someone does is create the schema.\n\nFor my money it is also helpful to actual write some prose describing what\nyou are providing and seeing and not just toss some settings and schema out\nthere.\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/57-minute-SELECT-tp5773169p5773174.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Oct 2013 18:17:14 -0700 (PDT)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "The last part, the EXPLAIN, is too big to send. Is there an alternative way I can get it too you, other than chopping it up and sending in multiple parts?\n\nThank you,\n\nSam\n\n\nFrom: Samuel Stearns\nSent: Thursday, 3 October 2013 10:34 AM\nTo: Samuel Stearns; [email protected]\nSubject: RE: 57 minute SELECT\n\nOk, let's try 3 parts:\n\nTable counts:\n\nsyslog - 150200285\ndevices - 3291\nmongroups - 71\n\nThe query:\n\nSELECT syslog.ip,\n syslog.msg,\n syslog.datetime,\n devices.hostname,\n devices.hostpop\nFROM syslog,\n devices\nWHERE syslog.ip IN\n (SELECT ip\n FROM devices,\n mongroups\n WHERE (active = 't'\n OR active = 's')\n AND devices.hostgroup = mongroups.hostgroup\n AND devices.hostname || '.' || devices.hostpop ~* E'pe1.mel4'\n AND devices.id != '1291')\n AND datetime <= '2013-08-01 00:00:00'\n AND datetime >= '2013-04-12 00:00:00'\n AND syslog.ip = devices.ip\n AND (devices.active = 't'\n OR devices.active = 's');\n\n<end part II>\n\nThank you,\n\nSam\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Samuel Stearns\nSent: Thursday, 3 October 2013 10:26 AM\nTo: [email protected]\nSubject: [PERFORM] 57 minute SELECT\n\nHowdy,\n\nI'm going to post this in 2 parts as I think it's too big for 1 post.\n\nEnvironment:\n\nPG 8.4.17\nLinux Ubuntu 10.04\nTotal RAM - 1G\n\nThings that have been performed:\n\n\n1. Explain on SELECT.\n\n2. ANALYZE database.\n\n3. VACUUM database.\n\n4. shared_buffers = 256M\n\n5. effective_cache_size = 768M\n\n6. 
work_mem = 512M\n\nTable DDL:\n\nnms=# \\d syslog\n View \"public.syslog\"\n Column | Type | Modifiers\n----------+-----------------------------+-----------\nip | inet |\nfacility | character varying(10) |\nlevel | character varying(10) |\ndatetime | timestamp without time zone |\nprogram | character varying(25) |\nmsg | text |\nseq | bigint |\nView definition:\nSELECT syslog_master.ip, syslog_master.facility, syslog_master.level, syslog_master.datetime, syslog_master.program, syslog_master.msg, syslog_master.seq\n FROM syslog_master;\nRules:\nsyslog_insert_201308 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-08-01'::date AND new.datetime < '2013-09-01'::date DO INSTEAD INSERT INTO syslog_201308 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201309 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-09-01'::date AND new.datetime < '2013-10-01'::date DO INSTEAD INSERT INTO syslog_201309 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201310 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-10-01'::date AND new.datetime < '2013-11-01'::date DO INSTEAD INSERT INTO syslog_201310 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_null AS\n ON INSERT TO syslog DO INSTEAD NOTHING\n\nnms=#\n\nnms=# \\d devices\nhostname | character varying(20) |\nhostpop | character varying(20) |\nhostgroup | character varying(20) |\nrack | character varying(10) |\nasset | character varying(10) |\nip | inet |\nsnmprw | character varying(20) |\nsnmpro | character varying(20) |\nsnmpver | character varying(3) |\nconsole | character varying(20) |\npsu1 | character varying(20) |\npsu2 | character varying(20) |\npsu3 | character varying(20) |\npsu4 | character varying(20) |\nalias1 | character varying(20) |\nalias2 | character varying(20) |\nfailure | character varying(255) |\nmodified | timestamp without time zone | not null default now()\nmodified_by | character varying(20) |\nactive | character(1) | default 't'::bpchar\nrad_secret | character varying(20) |\nrad_atr | character varying(40) |\nsnmpdev | integer |\nnetflow | text |\ncpu | integer |\ntemp | integer |\nfirmware_type_id | bigint | default 1\nIndexes:\n \"id_pkey\" PRIMARY KEY, btree (id)\n \"devices_active_index\" btree (active)\n \"devices_failure\" btree (failure)\n \"devices_hostgroup\" btree (hostgroup)\n \"devices_hostname\" btree (hostname)\n \"devices_hostpop\" btree (hostpop)\n \"devices_ip_index\" btree (ip)\n \"devices_snmprw\" btree (snmprw)\nForeign-key constraints:\n \"devices_firmware_type_id_fkey\" FOREIGN KEY (firmware_type_id) REFERENCES firmware_type(id)\nReferenced by:\n TABLE \"ac_attributes\" CONSTRAINT \"ac_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n TABLE \"acls_matrix\" CONSTRAINT \"acls_matrix_device_id_fkey\" FOREIGN KEY (device_id) REFERENCES devices(id) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"ip_local_pool_aggregates\" CONSTRAINT \"ip_local_pool_aggregates_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id)\n TABLE \"ipsla_instances\" CONSTRAINT \"ipsla_instances_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id) ON DELETE CASCADE\n TABLE \"lns_attributes\" CONSTRAINT \"lns_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n\n(END)\n\nnms=# \\d mongroups\n Table 
\"public.mongroups\"\n Column | Type | Modifiers\n------------+-----------------------+-----------\nhostgroup | character varying(20) |\nlocale | text |\ndepartment | character varying(20) |\nIndexes:\n \"ukey_hostgroup_department\" UNIQUE, btree (hostgroup, department)\n\nnms=#\n\n<end part I>\n\nThank you,\n\nSam\n\n\n\n\n\n\n\n\n\nThe last part, the EXPLAIN, is too big to send.  Is there an alternative way I can get it too you, other than chopping it up and sending in multiple parts?\n \nThank you,\n \nSam\n \n \n\n\nFrom: Samuel\n Stearns \nSent: Thursday, 3 October 2013 10:34 AM\nTo: Samuel Stearns; [email protected]\nSubject: RE: 57 minute SELECT\n\n\n \nOk, let’s try 3 parts:\n \nTable counts:\n \nsyslog – 150200285\ndevices – 3291\nmongroups – 71\n \nThe query:\n \nSELECT syslog.ip,\n       syslog.msg,\n       syslog.datetime,\n       devices.hostname,\n       devices.hostpop\nFROM syslog,\n     devices\nWHERE syslog.ip IN\n    (SELECT ip\n     FROM devices,\n          mongroups\n     WHERE (active = 't'\n            OR active = 's')\n       AND devices.hostgroup = mongroups.hostgroup\n       AND devices.hostname || '.' || devices.hostpop ~* E'pe1.mel4'\n       AND devices.id != '1291')\n  AND datetime <= '2013-08-01 00:00:00'\n  AND datetime >= '2013-04-12 00:00:00'\n  AND syslog.ip = devices.ip\n  AND (devices.active = 't'\n       OR devices.active = 's');\n \n<end part II>\n \nThank you,\n \nSam\n \n\n\nFrom: [email protected]\n [mailto:[email protected]] On Behalf Of Samuel Stearns\nSent: Thursday, 3 October 2013 10:26 AM\nTo: [email protected]\nSubject: [PERFORM] 57 minute SELECT\n\n\n \nHowdy,\n \nI’m going to post this in 2 parts as I think it’s too big for 1 post.\n \nEnvironment:\n \nPG 8.4.17\nLinux Ubuntu 10.04\nTotal RAM – 1G\n \nThings that have been performed:\n \n1.      \nExplain on SELECT.\n2.      \nANALYZE database.\n3.      \nVACUUM database.\n4.      \nshared_buffers = 256M\n5.      \neffective_cache_size = 768M\n6.      
\nwork_mem = 512M\n \nTable DDL:\n \nnms=# \\d syslog\n                View \"public.syslog\"\n  Column  |            Type             | Modifiers\n----------+-----------------------------+-----------\nip       | inet                        |\nfacility | character varying(10)       |\nlevel    | character varying(10)       |\ndatetime | timestamp without time zone |\nprogram  | character varying(25)       |\nmsg      | text                        |\nseq      | bigint                      |\nView definition:\nSELECT syslog_master.ip, syslog_master.facility, syslog_master.level, syslog_master.datetime, syslog_master.program, syslog_master.msg, syslog_master.seq\n   FROM syslog_master;\nRules:\nsyslog_insert_201308 AS\n    ON INSERT TO syslog\n   WHERE new.datetime >= '2013-08-01'::date AND new.datetime < '2013-09-01'::date DO INSTEAD  INSERT INTO syslog_201308 (ip, facility, level, datetime, program, msg)\n  VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201309 AS\n    ON INSERT TO syslog\n   WHERE new.datetime >= '2013-09-01'::date AND new.datetime < '2013-10-01'::date DO INSTEAD  INSERT INTO syslog_201309 (ip, facility, level, datetime, program, msg)\n  VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201310 AS\n    ON INSERT TO syslog\n   WHERE new.datetime >= '2013-10-01'::date AND new.datetime < '2013-11-01'::date DO INSTEAD  INSERT INTO syslog_201310 (ip, facility, level, datetime, program, msg)\n  VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_null AS\n    ON INSERT TO syslog DO INSTEAD NOTHING\n \nnms=#\n \nnms=# \\d devices\nhostname         | character varying(20)       |\nhostpop          | character varying(20)       |\nhostgroup        | character varying(20)       |\nrack             | character varying(10)       |\nasset            | character varying(10)       |\nip               | inet                        |\nsnmprw           | character varying(20)       |\nsnmpro           | character varying(20)       |\nsnmpver          | character varying(3)        |\nconsole          | character varying(20)       |\npsu1             | character varying(20)       |\npsu2             | character varying(20)       |\npsu3             | character varying(20)       |\npsu4             | character varying(20)       |\nalias1           | character varying(20)       |\nalias2           | character varying(20)       |\nfailure          | character varying(255)      |\nmodified         | timestamp without time zone | not null default now()\nmodified_by      | character varying(20)       |\nactive           | character(1)                | default 't'::bpchar\nrad_secret       | character varying(20)       |\nrad_atr          | character varying(40)       |\nsnmpdev          | integer                     |\nnetflow          | text                        |\ncpu              | integer                     |\ntemp             | integer                     |\nfirmware_type_id | bigint                      | default 1\nIndexes:\n    \"id_pkey\" PRIMARY KEY, btree (id)\n    \"devices_active_index\" btree (active)\n    \"devices_failure\" btree (failure)\n    \"devices_hostgroup\" btree (hostgroup)\n    \"devices_hostname\" btree (hostname)\n    \"devices_hostpop\" btree (hostpop)\n    \"devices_ip_index\" btree (ip)\n    \"devices_snmprw\" btree (snmprw)\nForeign-key constraints:\n    \"devices_firmware_type_id_fkey\" FOREIGN KEY (firmware_type_id) REFERENCES 
firmware_type(id)\nReferenced by:\n    TABLE \"ac_attributes\" CONSTRAINT \"ac_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n    TABLE \"acls_matrix\" CONSTRAINT \"acls_matrix_device_id_fkey\" FOREIGN KEY (device_id) REFERENCES devices(id) ON UPDATE CASCADE ON DELETE CASCADE\n    TABLE \"ip_local_pool_aggregates\" CONSTRAINT \"ip_local_pool_aggregates_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id)\n    TABLE \"ipsla_instances\" CONSTRAINT \"ipsla_instances_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id) ON DELETE CASCADE\n    TABLE \"lns_attributes\" CONSTRAINT \"lns_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n \n(END)\n \nnms=# \\d mongroups\n            Table \"public.mongroups\"\n   Column   |         Type          | Modifiers\n------------+-----------------------+-----------\nhostgroup  | character varying(20) |\nlocale     | text                  |\ndepartment | character varying(20) |\nIndexes:\n    \"ukey_hostgroup_department\" UNIQUE, btree (hostgroup, department)\n \nnms=#\n \n<end part I>\n \nThank you,\n \nSam", "msg_date": "Thu, 3 Oct 2013 01:17:27 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "Total RAM = 1G is correct\n\nThis query executes as the result of a search from our Network Management System Device Audit web tool where the date range is large and is focused on a specific device.\n\nI was thinking it should execute more quickly since syslog.ip has an index and we're not performing any textual matching.\n\nEXPLAIN:\n\nQUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=408.53..1962721.39 rows=98068 width=126) (actual time=30121.265..3419306.752 rows=1929714 loops=1)\n Hash Cond: (public.syslog_master.ip = public.devices.ip)\n -> Nested Loop (cost=209.67..1120466.90 rows=37316116 width=122) (actual time=30117.845..3416690.561 rows=1929714 loops=1)\n Join Filter: (public.syslog_master.ip = public.devices.ip)\n -> HashAggregate (cost=205.40..205.41 rows=1 width=7) (actual time=5.133..5.142 rows=1 loops=1)\n -> Nested Loop (cost=0.00..205.40 rows=1 width=7) (actual time=4.117..5.124 rows=1 loops=1)\n Join Filter: ((public.devices.hostgroup)::text = (mongroups.hostgroup)::text)\n -> Seq Scan on devices (cost=0.00..202.80 rows=1 width=14) (actual time=4.088..5.075 rows=1 loops=1)\n Filter: ((id <> 1291) AND ((active = 't'::bpchar) OR (active = 's'::bpchar)) AND ((((hostname)::text || '.'::text) || (hostpop)::text) ~* 'pe1.mel4'::text))\n -> Seq Scan on mongroups (cost=0.00..1.71 rows=71 width=6) (actual time=0.009..0.017 rows=71 loops=1)\n -> Append (cost=4.27..1114378.69 rows=470624 width=115) (actual time=30112.646..3415201.052 rows=1929766 loops=1)\n -> Bitmap Heap Scan on syslog_master (cost=4.27..9.61 rows=2 width=72) (actual time=0.027..0.027 rows=0 loops=1)\n Recheck Cond: (public.syslog_master.ip = public.devices.ip)\n Filter: ((public.syslog_master.datetime <= '2013-08-01 00:00:00'::timestamp without time zone) AND (public.syslog_master.datetime >= '2013-04-12 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on syslog_master_ip_idx (cost=0.00..4.27 rows=2 width=0) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (public.syslog_master.ip = public.devices.ip)\n -> Bitmap Heap Scan on 
syslog_201307 syslog_master (cost=4175.37..355209.50 rows=150004 width=112) (actual time=30112.618..686289.128 rows=297015 loops=1)\n Recheck Cond: (public.syslog_master.ip = public.devices.ip)\n Filter: ((public.syslog_master.datetime <= '2013-08-01 00:00:00'::timestamp without time zone) AND (public.syslog_master.datetime >= '2013-04-12 00:00:00'::timestamp without time\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of David Johnston\nSent: Thursday, 3 October 2013 10:47 AM\nTo: [email protected]\nSubject: Re: [PERFORM] 57 minute SELECT\n\nSamuel Stearns-2 wrote\n> Total RAM - 1G\n> \n> \n> 1. Explain on SELECT.\n\nSo either this is a typo (1 GB of RAM) or your query is likely ending up I/O bound.\n\nYou should probably provide EXPLAIN and EXPLAIN (ANALYZE) output since even with the schema it is impossible for someone to see what the planner is proposing for a multiple-million record source table that is going to be empty if all someone does is create the schema.\n\nFor my money it is also helpful to actual write some prose describing what you are providing and seeing and not just toss some settings and schema out there.\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/57-minute-SELECT-tp5773169p5773174.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 01:30:18 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "zone))\n -> Bitmap Index Scan on syslog_master_ip_idx (cost=0.00..4.27 rows=2 width=0) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (public.syslog_master.ip = public.devices.ip)\n -> Bitmap Heap Scan on syslog_201307 syslog_master (cost=4175.37..355209.50 rows=150004 width=112) (actual time=30112.618..686289.128 rows=297015 loops=1)\n Recheck Cond: (public.syslog_master.ip = public.devices.ip)\n Filter: ((public.syslog_master.datetime <= '2013-08-01 00:00:00'::timestamp without time zone) AND (public.syslog_master.datetime >= '2013-04-12 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on syslog_201307_ip_idx (cost=0.00..4137.88 rows=150004 width=0) (actual time=30040.703..30040.703 rows=297015 loops=1)\n Index Cond: (public.syslog_master.ip = public.devices.ip)\n -> Index Scan using syslog_201308_datetime_idx on syslog_201308 syslog_master (cost=0.00..8.46 rows=1 width=108) (actual time=30.022..124.656 rows=52 loops=1)\n Index Cond: ((public.syslog_master.datetime <= '2013-08-01 00:00:00'::timestamp without time zone) AND (public.syslog_master.datetime >= '2013-04-12 00:00:00'::timestamp without time zone))\n -> Bitmap Heap Scan on syslog_201304 syslog_master (cost=1829.05..235809.88 rows=98010 width=117) (actual time=1049.606..875045.663 rows=320488 loops=1)\n Recheck Cond: (public.syslog_master.ip = public.devices.ip)\n Filter: ((public.syslog_master.datetime <= '2013-08-01 00:00:00'::timestamp without time zone) AND (public.syslog_master.datetime >= '2013-04-12 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on 
syslog_201304_ip_idx (cost=0.00..1813.37 rows=98010 width=0) (actual time=984.401..984.401 rows=505789 loops=1)\n Index Cond: (public.syslog_master.ip = public.devices.ip)\n -> Bitmap Heap Scan on syslog_201305 syslog_master (cost=2157.14..264759.11 rows=114937 width=115) (actual time=926.035..910323.922 rows=520315 loops=1)\n Recheck Cond: (public.syslog_master.ip = public.devices.ip)\n Filter: ((public.syslog_master.datetime <= '2013-08-01 00:00:00'::timestamp without time zone) AND (public.syslog_master.datetime >= '2013-04-12 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on syslog_201305_ip_idx (cost=0.00..2128.41 rows=114937 width=0) (actual time=864.109..864.109 rows=520315 loops=1)\n Index Cond: (public.syslog_master.ip = public.devices.ip)\n -> Bitmap Heap Scan on syslog_201306 syslog_master (cost=2020.92..258582.12 rows=107670 width=117) (actual time=1948.265..942909.424 rows=791896 loops=1)\n Recheck Cond: (public.syslog_master.ip = public.devices.ip)\n Filter: ((public.syslog_master.datetime <= '2013-08-01 00:00:00'::timestamp without time zone) AND (public.syslog_master.datetime >= '2013-04-12 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on syslog_201306_ip_idx (cost=0.00..1994.01 rows=107670 width=0) (actual time=1896.295..1896.295 rows=791896 loops=1)\n Index Cond: (public.syslog_master.ip = public.devices.ip)\n -> Hash (cost=170.08..170.08 rows=2303 width=18) (actual time=3.398..3.398 rows=2386 loops=1)\n -> Seq Scan on devices (cost=0.00..170.08 rows=2303 width=18) (actual time=0.017..2.407 rows=2387 loops=1)\n Filter: ((active = 't'::bpchar) OR (active = 's'::bpchar))\n Total runtime: 3419878.638 ms\n\nThank you,\n\nSam\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Samuel Stearns\nSent: Thursday, 3 October 2013 11:00 AM\nTo: David Johnston; [email protected]\nSubject: Re: [PERFORM] 57 minute SELECT\n\nTotal RAM = 1G is correct\n\nThis query executes as the result of a search from our Network Management System Device Audit web tool where the date range is large and is focused on a specific device.\n\nI was thinking it should execute more quickly since syslog.ip has an index and we're not performing any textual matching.\n\nEXPLAIN:\n\nQUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=408.53..1962721.39 rows=98068 width=126) (actual time=30121.265..3419306.752 rows=1929714 loops=1)\n Hash Cond: (public.syslog_master.ip = public.devices.ip)\n -> Nested Loop (cost=209.67..1120466.90 rows=37316116 width=122) (actual time=30117.845..3416690.561 rows=1929714 loops=1)\n Join Filter: (public.syslog_master.ip = public.devices.ip)\n -> HashAggregate (cost=205.40..205.41 rows=1 width=7) (actual time=5.133..5.142 rows=1 loops=1)\n -> Nested Loop (cost=0.00..205.40 rows=1 width=7) (actual time=4.117..5.124 rows=1 loops=1)\n Join Filter: ((public.devices.hostgroup)::text = (mongroups.hostgroup)::text)\n -> Seq Scan on devices (cost=0.00..202.80 rows=1 width=14) (actual time=4.088..5.075 rows=1 loops=1)\n Filter: ((id <> 1291) AND ((active = 't'::bpchar) OR (active = 's'::bpchar)) AND ((((hostname)::text || '.'::text) || (hostpop)::text) ~* 'pe1.mel4'::text))\n -> Seq Scan on mongroups (cost=0.00..1.71 rows=71 width=6) (actual time=0.009..0.017 rows=71 loops=1)\n -> Append (cost=4.27..1114378.69 rows=470624 
width=115) (actual time=30112.646..3415201.052 rows=1929766 loops=1)\n -> Bitmap Heap Scan on syslog_master (cost=4.27..9.61 rows=2 width=72) (actual time=0.027..0.027 rows=0 loops=1)\n Recheck Cond: (public.syslog_master.ip = public.devices.ip)\n Filter: ((public.syslog_master.datetime <= '2013-08-01 00:00:00'::timestamp without time zone) AND (public.syslog_master.datetime >= '2013-04-12 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on syslog_master_ip_idx (cost=0.00..4.27 rows=2 width=0) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (public.syslog_master.ip = public.devices.ip)\n -> Bitmap Heap Scan on syslog_201307 syslog_master (cost=4175.37..355209.50 rows=150004 width=112) (actual time=30112.618..686289.128 rows=297015 loops=1)\n Recheck Cond: (public.syslog_master.ip = public.devices.ip)\n Filter: ((public.syslog_master.datetime <= '2013-08-01 00:00:00'::timestamp without time zone) AND (public.syslog_master.datetime >= '2013-04-12 00:00:00'::timestamp without time\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of David Johnston\nSent: Thursday, 3 October 2013 10:47 AM\nTo: [email protected]\nSubject: Re: [PERFORM] 57 minute SELECT\n\nSamuel Stearns-2 wrote\n> Total RAM - 1G\n> \n> \n> 1. Explain on SELECT.\n\nSo either this is a typo (1 GB of RAM) or your query is likely ending up I/O bound.\n\nYou should probably provide EXPLAIN and EXPLAIN (ANALYZE) output since even with the schema it is impossible for someone to see what the planner is proposing for a multiple-million record source table that is going to be empty if all someone does is create the schema.\n\nFor my money it is also helpful to actual write some prose describing what you are providing and seeing and not just toss some settings and schema out there.\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/57-minute-SELECT-tp5773169p5773174.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 01:35:25 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "On Wed, Oct 2, 2013 at 10:17 PM, Samuel Stearns\n<[email protected]> wrote:\n> The last part, the EXPLAIN, is too big to send. 
Is there an alternative way\n> I can get it too you, other than chopping it up and sending in multiple\n> parts?\n\n\nTry explain.depesz.com\n\n\nOn Wed, Oct 2, 2013 at 10:30 PM, Samuel Stearns\n<[email protected]> wrote:\n>\n> EXPLAIN:\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=408.53..1962721.39 rows=98068 width=126) (actual time=30121.265..3419306.752 rows=1929714 loops=1)\n> Hash Cond: (public.syslog_master.ip = public.devices.ip)\n\nSo your query is returning 2M rows.\n\nI think you should try lowering work_mem. 512M seems oversized for a\nquery this complex on a system with 1G. You may be thrashing the OS\ncache.\n\nAlso, you seem to have a problem with constraint exclusion. Some of\nthose bitmap heap scans aren't necessary, and the planner should know\nit. Are you missing the corresponding CHECK constraints on datetime?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Oct 2013 22:45:41 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "Thanks, Claudio:\n\nhttp://explain.depesz.com/s/WJQx\n\n\n-----Original Message-----\nFrom: Claudio Freire [mailto:[email protected]] \nSent: Thursday, 3 October 2013 11:16 AM\nTo: Samuel Stearns\nCc: David Johnston; [email protected]\nSubject: Re: [PERFORM] 57 minute SELECT\n\nOn Wed, Oct 2, 2013 at 10:17 PM, Samuel Stearns <[email protected]> wrote:\n> The last part, the EXPLAIN, is too big to send. Is there an \n> alternative way I can get it too you, other than chopping it up and \n> sending in multiple parts?\n\n\nTry explain.depesz.com\n\n\nOn Wed, Oct 2, 2013 at 10:30 PM, Samuel Stearns <[email protected]> wrote:\n>\n> EXPLAIN:\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> - Hash Join (cost=408.53..1962721.39 rows=98068 width=126) (actual \n> time=30121.265..3419306.752 rows=1929714 loops=1)\n> Hash Cond: (public.syslog_master.ip = public.devices.ip)\n\nSo your query is returning 2M rows.\n\nI think you should try lowering work_mem. 512M seems oversized for a query this complex on a system with 1G. You may be thrashing the OS cache.\n\nAlso, you seem to have a problem with constraint exclusion. Some of those bitmap heap scans aren't necessary, and the planner should know it. 
Are you missing the corresponding CHECK constraints on datetime?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 01:47:29 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "Samuel Stearns-2 wrote\n> EXPLAIN:\n> \n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=408.53..1962721.39 rows=98068 width=126) (actual\n> time=30121.265..3419306.752 rows=1929714 loops=1)\n\nYou are selecting and returning 2 million records...how fast do you want\nthis to run? For some reason I read 57 seconds initially - I guess 57\nminutes is a bit much...but the most obvious solution is RAM.\n\nMight want to include buffers output in the explain as well but:\n\nI'm doubting the contents of your result fit into the server memory so your\ndisk is involved which will severely slow down processing.\n\nHopefully someone more knowledgeable and experience will chime in to help\nyou.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/57-minute-SELECT-tp5773169p5773187.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Oct 2013 18:48:41 -0700 (PDT)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "Thanks, David.\n\nCan't run EXPLAIN (ANALYZE, BUFFERS) as I'm on 8.4.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of David Johnston\nSent: Thursday, 3 October 2013 11:19 AM\nTo: [email protected]\nSubject: Re: [PERFORM] 57 minute SELECT\n\nSamuel Stearns-2 wrote\n> EXPLAIN:\n> \n> QUERY PLAN \n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> - Hash Join (cost=408.53..1962721.39 rows=98068 width=126) (actual\n> time=30121.265..3419306.752 rows=1929714 loops=1)\n\nYou are selecting and returning 2 million records...how fast do you want this to run? 
For some reason I read 57 seconds initially - I guess 57 minutes is a bit much...but the most obvious solution is RAM.\n\nMight want to include buffers output in the explain as well but:\n\nI'm doubting the contents of your result fit into the server memory so your disk is involved which will severely slow down processing.\n\nHopefully someone more knowledgeable and experience will chime in to help you.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/57-minute-SELECT-tp5773169p5773187.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 02:04:33 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "On Wed, Oct 2, 2013 at 10:47 PM, Samuel Stearns\n<[email protected]> wrote:\n> Thanks, Claudio:\n>\n> http://explain.depesz.com/s/WJQx\n\nIf you have a test database, and if it doesn't hurt other queries of\ncourse, try clustering on the ip index.\n\nI believe your problem is that the index isn't helping much, it's\nprobably hurting you in fact. If you cluster over ip, however, the\nscan will go almost sequentially, and there will be no wasted bytes in\nthe pages fetched, which will be much friendlier on your I/O\nsubsystem.\n\nIf I were in your shoes, I'd cluster each of the monthly tables as\nthey become inactive.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 01:13:58 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "Thanks, Claudio.\n\nI'll have a look at the clustering.\n\nWe have also noticed that the same query with a datetime range of 3 hours (rather than 4 months) runs in just 30 seconds:\n\nAND datetime <= '2013-10-03 10:03:49'\nAND datetime >= '2013-10-03 07:03:49'\n\n\n-----Original Message-----\nFrom: Claudio Freire [mailto:[email protected]] \nSent: Thursday, 3 October 2013 1:44 PM\nTo: Samuel Stearns\nCc: David Johnston; [email protected]\nSubject: Re: [PERFORM] 57 minute SELECT\n\nOn Wed, Oct 2, 2013 at 10:47 PM, Samuel Stearns <[email protected]> wrote:\n> Thanks, Claudio:\n>\n> http://explain.depesz.com/s/WJQx\n\nIf you have a test database, and if it doesn't hurt other queries of course, try clustering on the ip index.\n\nI believe your problem is that the index isn't helping much, it's probably hurting you in fact. 
If you cluster over ip, however, the scan will go almost sequentially, and there will be no wasted bytes in the pages fetched, which will be much friendlier on your I/O subsystem.\n\nIf I were in your shoes, I'd cluster each of the monthly tables as they become inactive.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 04:19:29 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "On 03/10/2013 03:17, Samuel Stearns wrote:\n> The last part, the EXPLAIN, is too big to send. Is there an alternative\n> way I can get it too you, other than chopping it up and sending in\n> multiple parts?\n\nThe usual way is via http://explain.depesz.com/ .", "msg_date": "Thu, 03 Oct 2013 12:08:01 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "On Thu, Oct 03, 2013 at 04:19:29AM +0000, Samuel Stearns wrote:\n> Thanks, Claudio.\n> \n> I'll have a look at the clustering.\n> \n> We have also noticed that the same query with a datetime range of 3 hours (rather than 4 months) runs in just 30 seconds:\n> \n> AND datetime <= '2013-10-03 10:03:49'\n> AND datetime >= '2013-10-03 07:03:49'\n> \n\nHi Samuel,\n\nThat is even worse performance relatively. 30s for a 3 hour range equals\n28800s for a 4 month (2880 hours) range, or 8 hours. I definitely would\nconsider clustering.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 07:59:18 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "On Thu, Oct 03, 2013 at 01:47:29AM +0000, Samuel Stearns wrote:\n- Thanks, Claudio:\n- \n- http://explain.depesz.com/s/WJQx\n\nYou're spending a lot of time in the hash join which can kill a system with\nlow ram.\n\nYou may, just for fun, want to try the query with enable_hashjoin=false.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 09:20:52 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "On Thu, Oct 03, 2013 at 09:20:52AM -0700, David Kerr wrote:\n- On Thu, Oct 03, 2013 at 01:47:29AM +0000, Samuel Stearns wrote:\n- - Thanks, Claudio:\n- - \n- - http://explain.depesz.com/s/WJQx\n- \n- You're spending a lot of time in the hash join which can kill a system with\n- low ram.\n- \n- You may, just for fun, want to try the query with enable_hashjoin=false.\n\nSorry, ignore that, more coffe before mailing lists for me.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 09:25:00 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "Thanks for all the advice here. I'll look at setting up something in a test environment and play with the clustering. 
Testing how other queries perform against the clustering, also.\n\nThank you!\n\nSam\n\n\n-----Original Message-----\nFrom: David Kerr [mailto:[email protected]] \nSent: Friday, 4 October 2013 1:55 AM\nTo: Samuel Stearns\nCc: Claudio Freire; David Johnston; [email protected]\nSubject: Re: [PERFORM] 57 minute SELECT\n\nOn Thu, Oct 03, 2013 at 09:20:52AM -0700, David Kerr wrote:\n- On Thu, Oct 03, 2013 at 01:47:29AM +0000, Samuel Stearns wrote:\n- - Thanks, Claudio:\n- - \n- - http://explain.depesz.com/s/WJQx\n- \n- You're spending a lot of time in the hash join which can kill a system with\n- low ram.\n- \n- You may, just for fun, want to try the query with enable_hashjoin=false.\n\nSorry, ignore that, more coffe before mailing lists for me.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 22:49:20 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 57 minute SELECT" }, { "msg_contents": "Missed the 2nd part of Claudio's reply here.\n\nI actually tried different settings of work_mem up to 512M which didn't make any difference.\n\nCheck constraints appear to be there:\n\nnms=# \\d syslog_201304\n Table \"public.syslog_201304\"\n Column | Type | Modifiers\n----------+-----------------------------+-------------------------------------------------------------\n ip | inet |\n facility | character varying(10) |\n level | character varying(10) |\n datetime | timestamp without time zone |\n program | character varying(25) |\n msg | text |\n seq | bigint | not null default nextval('syslog_master_seq_seq'::regclass)\nIndexes:\n \"syslog_201304_datetime_idx\" btree (datetime)\n \"syslog_201304_ip_idx\" btree (ip)\n \"syslog_201304_seq_idx\" btree (seq)\nCheck constraints:\n \"syslog_201304_datetime_check\" CHECK (datetime >= '2013-04-01'::date AND datetime < '2013-05-01'::date)\nInherits: syslog_master\n\nnms=#\n\n\n-----Original Message-----\nFrom: Claudio Freire [mailto:[email protected]] \nSent: Thursday, 3 October 2013 11:16 AM\nTo: Samuel Stearns\nCc: David Johnston; [email protected]\nSubject: Re: [PERFORM] 57 minute SELECT\n\nOn Wed, Oct 2, 2013 at 10:17 PM, Samuel Stearns <[email protected]> wrote:\n> The last part, the EXPLAIN, is too big to send. Is there an \n> alternative way I can get it too you, other than chopping it up and \n> sending in multiple parts?\n\n\nTry explain.depesz.com\n\n\nOn Wed, Oct 2, 2013 at 10:30 PM, Samuel Stearns <[email protected]> wrote:\n>\n> EXPLAIN:\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> - Hash Join (cost=408.53..1962721.39 rows=98068 width=126) (actual \n> time=30121.265..3419306.752 rows=1929714 loops=1)\n> Hash Cond: (public.syslog_master.ip = public.devices.ip)\n\nSo your query is returning 2M rows.\n\nI think you should try lowering work_mem. 512M seems oversized for a query this complex on a system with 1G. You may be thrashing the OS cache.\n\nAlso, you seem to have a problem with constraint exclusion. Some of those bitmap heap scans aren't necessary, and the planner should know it. 
Are you missing the corresponding CHECK constraints on datetime?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Oct 2013 23:05:19 +0000", "msg_from": "Samuel Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 57 minute SELECT" } ]
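Taken together, the advice in this thread (cluster the inactive monthly partitions on their ip index, and rely on the datetime CHECK constraints for exclusion) amounts to something like the sketch below. The partition and index names are the ones shown in the \d output above; the ANALYZE step and the lock warning are additions for completeness, not part of the thread.

    -- 'partition' is already the default from 8.4 onward; shown for completeness:
    SET constraint_exclusion = partition;

    -- Rewrite a closed monthly partition in ip order, then refresh its stats.
    -- Note: CLUSTER holds an exclusive lock on the table while it runs.
    CLUSTER syslog_201304 USING syslog_201304_ip_idx;
    ANALYZE syslog_201304;

Repeating this for each partition as the month closes matches Claudio's suggestion to "cluster each of the monthly tables as they become inactive".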
[ { "msg_contents": "Hi,\nI want to know the reason behind the case:\nMy query processes from JDBC (Java Program) to PostgreSQL. I use system time by invoking java function, I collect one time unit before the query statement perform and second after the execution of query statement. \nI found 85 ms time unit in DOS (win7) (laptop 4cores). both Java and PostgreSQL installed and invoked on the same machine, respectively.\nOn the other hand, I use same process (separate installation) on linux on 8 cores physical machine with 2times greater then laptop. \nI found 150 ms. (which is a question for me because the time in Linux environment should give me half of the time taking on laptop)\nI also make same setting of postgresql.conf in the linux setup, which is available same in the win7 setup, because win7 setup gives better performance of the query.\nWhat do u suggest me, where I need to make performance tuning? which configuration setting must need to modify in the linux?\n * laptop RAM 4 GB and Linux machine 32 GB\nlooking positive response.\n\n\n--\n\nAftab A. Chandio\nPhD Scholar(Research Center for Cloud Computing)\nShenzhen Institutes of Advanced Technology, Chinese Academy of Sciences\n7th Floor, Shenzhen Cloud Computing Center at National Supercomputing Center in Shenzhen (NSCS)\nXueyuan B.1068, University Town, Xili, Shenzhen, China.\n+86 13244762252\n\nLecturer\nInstitutes of Mathematics & Computer Science\nUniversity of Sindh, Jamshoro, Pakistan.\n+92 3003038843\n\n\n\nHi,I want to know the reason behind the case:My query processes from JDBC (Java Program) to PostgreSQL. I use system time by invoking java function, I collect one time unit before the query statement perform and second after the execution of query statement. I found 85 ms time unit in DOS (win7) (laptop 4cores). both Java and PostgreSQL installed and invoked on the same machine, respectively.On the other hand, I use same process (separate installation) on linux on 8 cores physical machine with 2times greater then laptop. I found 150 ms. (which is a question for me because the time in Linux environment should give me half of the time taking on laptop)I also make same setting  of postgresql.conf in the linux setup, which is available same in the win7  setup, because win7 setup gives better performance of the query.What do u suggest me, where I need to make performance tuning? which configuration setting must need to modify in the linux? * laptop RAM 4 GB and Linux machine 32 GBlooking positive response.--Aftab A. ChandioPhD Scholar(Research Center for Cloud Computing)Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences7th Floor, Shenzhen Cloud Computing Center at National Supercomputing Center in Shenzhen (NSCS)Xueyuan B.1068, University Town, Xili, Shenzhen, China.+86 13244762252LecturerInstitutes of Mathematics & Computer ScienceUniversity of Sindh, Jamshoro, Pakistan.+92 3003038843", "msg_date": "Tue, 8 Oct 2013 09:48:18 +0800 (GMT+08:00)", "msg_from": "\"Aftab Ahmed Chandio\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgreSQL query via JDBC in different OS taking different running\n time?" }, { "msg_contents": "Aftab Ahmed Chandio <[email protected]> wrote:\n\n> My query processes from JDBC (Java Program) to PostgreSQL. I use\n> system time by invoking java function, I collect one time unit\n> before the query statement perform and second after the execution\n> of query statement. \n> I found 85 ms time unit in DOS (win7) (laptop 4cores). 
both Java\n> and PostgreSQL installed and invoked on the same machine,\n> respectively.\n> On the other hand, I use same process (separate installation) on\n> linux on 8 cores physical machine with 2times greater then\n> laptop. \n> I found 150 ms. (which is a question for me because the time in\n> Linux environment should give me half of the time taking on\n> laptop)\n> I also make same setting  of postgresql.conf in the linux setup,\n> which is available same in the win7  setup, because win7 setup\n> gives better performance of the query.\n> What do u suggest me, where I need to make performance tuning?\n> which configuration setting must need to modify in the linux?\n> * laptop RAM 4 GB and Linux machine 32 GB\n\nGiven a little time, I could probably list 100 plausible reasons\nthat could be.  For my part, load balancing a production system\nbetween PostgreSQL on Windows and on Linux hitting identical\ndatabases on identical hardware, I saw 30% better performance on\nLinux.\n\nI would start by getting timings for query execution using EXPLAIN\nANALYZE, to see how PostgreSQL itself is performing on the two\nenvironments.  I would test raw connect/disconnect speed.  I would\nbenchmark RAM using STREAM and disk using bonnie++.\n\nYou might want to review this page, and post a more detailed report\nto the pgsql-performance list:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nPosting to multiple lists is generally considered bad form.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Mon, 7 Oct 2013 20:35:30 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgreSQL query via JDBC in different OS taking different\n running time?" }, { "msg_contents": "On 10/7/2013 6:48 PM, Aftab Ahmed Chandio wrote:\n> I found 85 ms time unit in DOS (win7) (laptop 4cores). both Java and \n> PostgreSQL installed and invoked on the same machine, respectively.\n> On the other hand, I use same process (separate installation) on linux \n> on 8 cores physical machine with 2times greater then laptop.\n> I found 150 ms. (which is a question for me because the time in Linux \n> environment should give me half of the time taking on laptop)\n\na single connection session will only use a single core at a time.\n\ndepending on the nature of this query, it may have been CPU or Disk IO \nbound, without knowing the query, the database schema, and the hardware \nspecification of both systems, its impossible to guess.\n\nfirst thing to do is run..\n\n explain analyze ...your query here...;\n\non both platforms, and verify they are doing the same thing.\n\n\n-- \njohn r pierce 37N 122W\nsomewhere on the middle of the left coast\n\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Mon, 07 Oct 2013 20:37:56 -0700", "msg_from": "John R Pierce <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgreSQL query via JDBC in different OS taking different\n running time?" }, { "msg_contents": "Aftab Ahmed Chandio wrote\n> I found 85 ms time unit in DOS (win7) (laptop 4cores). 
both Java and\n> PostgreSQL installed and invoked on the same machine, respectively.\n> On the other hand, I use same process (separate installation) on linux on\n> 8 cores physical machine with 2times greater then laptop. \n> I found 150 ms. (which is a question for me because the time in Linux\n> environment should give me half of the time taking on laptop)\n\nI'm not particularly performance measuring experienced but a few items come\nto mind:\n\nA single (or handful) of manual runs is not going to provide good data for\ncomparison\nHard drive characteristics can make a difference\n\nA single query uses a single process/thread so core count is irrelevant\nRAM is likely immaterial though depends heavily on the dataset\n\nThese last two factors is why your \"2times greater\" system is in fact nearly\nidentical to the laptop with respect to its ability to run and single query. \nYour Linux system is probably capable of handling twice the data and\nsimultaneous connections but each connection is limited.\n\nLastly, the execution times - while relatively different - are both quite\nsmall and subject to considerable system noise - which is why a single run\nis insufficient to draw conclusions.\n\nI'm not really sure what kind of positive response you want. There may be\nroom to improve the Linux setup, and using the same configuration on two\ndifference OS is not going to mean they should be expected to provide the\nsame performance, but you need to be much more detailed in what you are\ntesting and your measurement procedure if you expect any actionable advice.\n\nIn order to do performance tuning you need to setup a realistic environment\nwithin with to perform measurements. You are either lacking that or have\nfailed to describe it adequately. Once you can measure those measurements\nwill guide you to where to need to either tweak settings or improve\nhardware.\n\nFinally, the ability and need for configuration changes is highly dependent\nupon the version of PostgreSQL you are running - that should be the first\nthing you disclose.\n\nAnd then, you presume that it is differences in PostgreSQL that are to be\nsolved but you have the entire Java VM to be concerned with as well. \nRunning your queries in psql removes that variable and helps pin-point where\nthe tuning likely needs to occur.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/postgreSQL-query-via-JDBC-in-different-OS-taking-different-running-time-tp5773618p5773637.html\nSent from the PostgreSQL - general mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Mon, 7 Oct 2013 20:44:13 -0700 (PDT)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgreSQL query via JDBC in different OS taking different\n running time?" }, { "msg_contents": "On Tue, Oct 8, 2013 at 3:48 AM, Aftab Ahmed Chandio <[email protected]> wrote:\n> What do u suggest me, where I need to make performance tuning? w hich\n> configuration setting must need to modify in the linux?\n\nWell, others have already pointed out that you should first measure\nyour query on the server. 
I would point out that the JVM itself could\nbe different or differently configured on Linux/win machines, and this\nwill lead to different results.\nSecond it is not clear to me why are you measuring the same query on\ndifferent machines and OSs, or better, why are you assuming the\nresulting time should be the same.\n\nLuca\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Oct 2013 13:02:57 +0200", "msg_from": "Luca Ferrari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] postgreSQL query via JDBC in different OS taking\n different running time?" }, { "msg_contents": "On Mon, Oct 7, 2013 at 10:35 PM, Kevin Grittner <[email protected]> wrote:\n> Aftab Ahmed Chandio <[email protected]> wrote:\n>\n>> My query processes from JDBC (Java Program) to PostgreSQL. I use\n>> system time by invoking java function, I collect one time unit\n>> before the query statement perform and second after the execution\n>> of query statement.\n>> I found 85 ms time unit in DOS (win7) (laptop 4cores). both Java\n>> and PostgreSQL installed and invoked on the same machine,\n>> respectively.\n>> On the other hand, I use same process (separate installation) on\n>> linux on 8 cores physical machine with 2times greater then\n>> laptop.\n>> I found 150 ms. (which is a question for me because the time in\n>> Linux environment should give me half of the time taking on\n>> laptop)\n>> I also make same setting of postgresql.conf in the linux setup,\n>> which is available same in the win7 setup, because win7 setup\n>> gives better performance of the query.\n>> What do u suggest me, where I need to make performance tuning?\n>> which configuration setting must need to modify in the linux?\n>> * laptop RAM 4 GB and Linux machine 32 GB\n>\n> Given a little time, I could probably list 100 plausible reasons\n> that could be. For my part, load balancing a production system\n> between PostgreSQL on Windows and on Linux hitting identical\n> databases on identical hardware, I saw 30% better performance on\n> Linux.\n\nOne sneaky way that windows tends to beat linux is that windows has a\nlow precision high performance timer that linux does not have. This\naffects both java and postgres and particularly tends to show up when\nbenchmarking with times.\n\nmerlin\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Thu, 10 Oct 2013 16:44:30 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgreSQL query via JDBC in different OS taking\n different running time?" } ]
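One concrete way to act on the advice above (and to sidestep the client-side timer differences Merlin mentions) is to time the statement in psql on both machines, so the JVM, the JDBC driver and the network are taken out of the measurement. The query text below is a placeholder, and the logging setting is an optional extra rather than something suggested in the thread.

    \timing
    EXPLAIN ANALYZE SELECT ...your query here...;

    -- optionally, in postgresql.conf, log the server-side duration of every
    -- statement the Java program sends, for comparison with the JDBC timings:
    -- log_min_duration_statement = 0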
[ { "msg_contents": "I've been working with partitions (I have split my table in around 3000\npartitions), and if i did some like this\n\nSELECT *\nFROM my_parent_table\nWHERE some_value_in_which_partitioned_my_table in (34, 36, 48)\n\nconstraint_exclusion works, (looking EXPLAIN, I can see that in planner\nonly check the partition_table with CHECH id in 34, 36 and 48. Great!\n\nBut if I increase the amount of ids, constraint_exclusion don't work.\n\nSELECT *\nFROM my_parent_table\nWHERE some_id in (34, 36, 48, 65, ... 234, 310) (in total, 101 different\nids)\n\nWhen i do a query like that, EXPLAIN show me that ALL the partitioned\ntables (3000) are checked (and, of course, the Query is too slow)\n\nWhat is happening?\n\nI've been working with partitions (I have split my table in around 3000 partitions), and if i did some like thisSELECT *FROM my_parent_tableWHERE some_value_in_which_partitioned_my_table in (34, 36, 48)\nconstraint_exclusion works, (looking EXPLAIN, I can see that in planner only check the partition_table with CHECH id in 34, 36 and 48. Great!But if I increase the amount of ids, constraint_exclusion don't work.\nSELECT *FROM my_parent_tableWHERE some_id in (34, 36, 48, 65, ... 234, 310) (in total, 101 different ids)When i do a query like that, EXPLAIN show me that ALL the partitioned tables (3000) are checked (and, of course, the Query is too slow)\nWhat is happening?", "msg_date": "Tue, 8 Oct 2013 12:31:03 -0300", "msg_from": "Marcelo Vega <[email protected]>", "msg_from_op": true, "msg_subject": "Is there a Maximum number of partitions in which constraint_exclusion\n works?" } ]
[ { "msg_contents": "Hi, i want to know what is the pgpool parameter for close the \nconnections directly in the database, because the pgpool II close fine \nthe the childs with the life time, but the connection in the database \ncontinue open in idle state.\n\nthis is my pool config\n\nnum_init_children = 100\nmax_pool = 8\nchild_life_time = 60\nchild_max_connections = 0\nconnection_life_time = 0\nclient_idle_limit = 60\n\n\nand this is my postgresql.conf\n\nmax_connections = 800\nshared_buffers = 2048MB\ntemp_buffers = 64MB\nwork_mem = 2048MB\nmaintenance_work_mem = 2048MB\nwal_buffers = 256\ncheckpoint_segments = 103\n\n\nthanks\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Oct 2013 10:00:51 -0500", "msg_from": "Jeison Bedoya Delgado <[email protected]>", "msg_from_op": true, "msg_subject": "limit connections pgpool" }, { "msg_contents": "On Fri, Oct 11, 2013 at 12:00 AM, Jeison Bedoya Delgado\n<[email protected]> wrote:\n> Hi, i want to know what is the pgpool parameter for close the connections\n> directly in the database, because the pgpool II close fine the the childs\n> with the life time, but the connection in the database continue open in idle\n> state.\nYou should ask that directly to the pgpool mailing lists:\nhttp://www.pgpool.net/mediawiki/index.php/Mailing_lists\n\nRegards,\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Oct 2013 12:10:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit connections pgpool" }, { "msg_contents": "> Hi, i want to know what is the pgpool parameter for close the\n> connections directly in the database, because the pgpool II close fine\n> the the childs with the life time, but the connection in the database\n> continue open in idle state.\n\nThat's the result of connection cache functionality of pgpool-II. If\nyou don't need the connection cache of pgpool-II at all, you could\nturn it off:\n\nconnection_cache = off\n\nOr you could set following to non 0.\n\nconnection_life_time\n\nAfter an idle connection to PostgreSQL backend lasts for\nconnection_life_time seconds, the connection will be turned off.\n\nIf you have further questions, you'd better to subscribe and post the\nquestion:\n\nhttp://www.pgpool.net/mailman/listinfo/pgpool-general\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n\n> this is my pool config\n> \n> num_init_children = 100\n> max_pool = 8\n> child_life_time = 60\n> child_max_connections = 0\n> connection_life_time = 0\n> client_idle_limit = 60\n> \n> \n> and this is my postgresql.conf\n> \n> max_connections = 800\n> shared_buffers = 2048MB\n> temp_buffers = 64MB\n> work_mem = 2048MB\n> maintenance_work_mem = 2048MB\n> wal_buffers = 256\n> checkpoint_segments = 103\n> \n> \n> thanks\n> \n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Oct 2013 12:38:40 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit connections pgpool" } ]
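In pgpool.conf terms, Tatsuo's answer comes down to one of the two lines below; the 300-second value is an illustrative assumption, not a recommendation from the thread.

    # either disable pgpool-II's connection cache entirely ...
    connection_cache = off

    # ... or keep the cache but let idle backend connections expire
    # (the poster's connection_life_time = 0 means cached backends never close):
    connection_life_time = 300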
[ { "msg_contents": "Hi,\n\nI'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a 16\nOpteron 6276 CPU box. We limit connections to roughly 120, but our webapp\nis configured to allocate a thread-local connection, so those connections\nare rarely doing anything more than half the time.\n\nWe have been running smoothly for over a year on this configuration, and\nrecently started having huge CPU spikes that bring the system to its knees.\nGiven that it is a multiuser system, it has been quite hard to pinpoint the\nexact cause, but I think we've narrowed it down to two data import jobs\nthat were running in semi-long transactions (clusters of row inserts).\n\nThe tables affected by these inserts are used in common queries.\n\nThe imports will bring in a row count of perhaps 10k on average covering 4\ntables.\n\nThe insert transactions are at isolation level read committed (the default\nfor the JDBC driver).\n\nWhen the import would run (again, theory...we have not been able to\nreproduce), we would end up maxed out on CPU, with a load average of 50 for\n16 CPUs (our normal busy usage is a load average of 5 out of 16 CPUs).\n\nWhen looking at the active queries, most of them are against the tables\nthat are affected by these imports.\n\nOur workaround (that is holding at present) was to drop the transactions on\nthose imports (which is not optimal, but fortunately is acceptable for this\nparticular data). This workaround has prevented any further incidents, but\nis of course inconclusive.\n\nDoes this sound familiar to anyone, and if so, please advise.\n\nThanks in advance,\n\nTony Kay\n\nHi,I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a 16 Opteron 6276 CPU box. We limit connections to roughly 120, but our webapp is configured to allocate a thread-local connection, so those connections are rarely doing anything more than half the time.\nWe have been running smoothly for over a year on this configuration, and recently started having huge CPU spikes that bring the system to its knees. Given that it is a multiuser system, it has been quite hard to pinpoint the exact cause, but I think we've narrowed it down to two data import jobs that were running in semi-long transactions (clusters of row inserts).\nThe tables affected by these inserts are used in common queries.The imports will bring in a row count of perhaps 10k on average covering 4 tables.The insert transactions are at isolation level read committed (the default for the JDBC driver).\nWhen the import would run (again, theory...we have not been able to reproduce), we would end up maxed out on CPU, with a load average of 50 for 16 CPUs (our normal busy usage is a load average of 5 out of 16 CPUs).\nWhen looking at the active queries, most of them are against the tables that are affected by these imports.Our workaround (that is holding at present) was to drop the transactions on those imports (which is not optimal, but fortunately is acceptable for this particular data). 
This workaround has prevented any further incidents, but is of course inconclusive.\nDoes this sound familiar to anyone, and if so, please advise.Thanks in advance,Tony Kay", "msg_date": "Mon, 14 Oct 2013 16:00:14 -0700", "msg_from": "Tony Kay <[email protected]>", "msg_from_op": true, "msg_subject": "CPU spikes and transactions" }, { "msg_contents": "Hi Calvin,\n\nYes, I have sar data on all systems going back for years.\n\nSince others are going to probably want to be assured I am really \"reading\nthe data\" right:\n\n- This is 92% user CPU time, 5% sys, and 1% soft\n- On some of the problems, I _do_ see a short spike of pgswpout's (memory\npressure), but again, not enough to end up using much system time\n- The database disks are idle (all data being used is in RAM)..and are\nSSDs....average service times are barely measurable in ms.\n\nIf I had to guess, I'd say it was spinlock misbehavior....I cannot\nunderstand why ekse a transaction blocking other things would drive the\nCPUs so hard into the ground with user time.\n\nTony\n\nTony Kay\n\nTeamUnify, LLC\nTU Corporate Website <http://www.teamunify.com/>\nTU Facebook <http://www.facebook.com/teamunify> | Free OnDeck Mobile\nApps<http://www.teamunify.com/__corp__/ondeck/>\n\n\n\nOn Mon, Oct 14, 2013 at 4:05 PM, Calvin Dodge <[email protected]> wrote:\n\n> Have you tried running \"vmstat 1\" during these times? If so, what is\n> the percentage of WAIT time? Given that IIRC shared buffers should be\n> no more than 25% of installed memory, I wonder if too little is\n> available for system caching of disk reads. A high WAIT percentage\n> would indicate excessive I/O (especially random seeks).\n>\n> Calvin Dodge\n>\n> On Mon, Oct 14, 2013 at 6:00 PM, Tony Kay <[email protected]> wrote:\n> > Hi,\n> >\n> > I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a 16\n> > Opteron 6276 CPU box. We limit connections to roughly 120, but our\n> webapp is\n> > configured to allocate a thread-local connection, so those connections\n> are\n> > rarely doing anything more than half the time.\n> >\n> > We have been running smoothly for over a year on this configuration, and\n> > recently started having huge CPU spikes that bring the system to its\n> knees.\n> > Given that it is a multiuser system, it has been quite hard to pinpoint\n> the\n> > exact cause, but I think we've narrowed it down to two data import jobs\n> that\n> > were running in semi-long transactions (clusters of row inserts).\n> >\n> > The tables affected by these inserts are used in common queries.\n> >\n> > The imports will bring in a row count of perhaps 10k on average covering\n> 4\n> > tables.\n> >\n> > The insert transactions are at isolation level read committed (the\n> default\n> > for the JDBC driver).\n> >\n> > When the import would run (again, theory...we have not been able to\n> > reproduce), we would end up maxed out on CPU, with a load average of 50\n> for\n> > 16 CPUs (our normal busy usage is a load average of 5 out of 16 CPUs).\n> >\n> > When looking at the active queries, most of them are against the tables\n> that\n> > are affected by these imports.\n> >\n> > Our workaround (that is holding at present) was to drop the transactions\n> on\n> > those imports (which is not optimal, but fortunately is acceptable for\n> this\n> > particular data). 
This workaround has prevented any further incidents,\n> but\n> > is of course inconclusive.\n> >\n> > Does this sound familiar to anyone, and if so, please advise.\n> >\n> > Thanks in advance,\n> >\n> > Tony Kay\n> >\n>\n\nHi Calvin,Yes, I have sar data on all systems going back for years. Since others are going to probably want to be assured I am really \"reading the data\" right:\n- This is 92% user CPU time, 5% sys, and 1% soft- On some of the problems, I _do_ see a short spike of pgswpout's (memory pressure), but again, not enough to end up using much system time\n- The database disks are idle (all data being used is in RAM)..and are SSDs....average service times are barely measurable in ms.If I had to guess, I'd say it was spinlock misbehavior....I cannot understand why ekse a transaction blocking other things would drive the CPUs so hard into the ground with user time.\nTony\nTony KayTeamUnify, LLCTU Corporate Website\nTU Facebook | Free OnDeck Mobile Apps\n\nOn Mon, Oct 14, 2013 at 4:05 PM, Calvin Dodge <[email protected]> wrote:\nHave you tried running \"vmstat 1\" during these times? If so, what is\nthe percentage of WAIT time?  Given that IIRC shared buffers should be\nno more than 25% of installed memory, I wonder if too little is\navailable for system caching of disk reads.  A high WAIT percentage\nwould indicate excessive I/O (especially random seeks).\n\nCalvin Dodge\n\nOn Mon, Oct 14, 2013 at 6:00 PM, Tony Kay <[email protected]> wrote:\n> Hi,\n>\n> I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a 16\n> Opteron 6276 CPU box. We limit connections to roughly 120, but our webapp is\n> configured to allocate a thread-local connection, so those connections are\n> rarely doing anything more than half the time.\n>\n> We have been running smoothly for over a year on this configuration, and\n> recently started having huge CPU spikes that bring the system to its knees.\n> Given that it is a multiuser system, it has been quite hard to pinpoint the\n> exact cause, but I think we've narrowed it down to two data import jobs that\n> were running in semi-long transactions (clusters of row inserts).\n>\n> The tables affected by these inserts are used in common queries.\n>\n> The imports will bring in a row count of perhaps 10k on average covering 4\n> tables.\n>\n> The insert transactions are at isolation level read committed (the default\n> for the JDBC driver).\n>\n> When the import would run (again, theory...we have not been able to\n> reproduce), we would end up maxed out on CPU, with a load average of 50 for\n> 16 CPUs (our normal busy usage is a load average of 5 out of 16 CPUs).\n>\n> When looking at the active queries, most of them are against the tables that\n> are affected by these imports.\n>\n> Our workaround (that is holding at present) was to drop the transactions on\n> those imports (which is not optimal, but fortunately is acceptable for this\n> particular data). This workaround has prevented any further incidents, but\n> is of course inconclusive.\n>\n> Does this sound familiar to anyone, and if so, please advise.\n>\n> Thanks in advance,\n>\n> Tony Kay\n>", "msg_date": "Mon, 14 Oct 2013 16:26:53 -0700", "msg_from": "Tony Kay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On 15.10.2013 01:00, Tony Kay wrote:\n> Hi,\n> \n> I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a\n> 16 Opteron 6276 CPU box. 
We limit connections to roughly 120, but\n> our webapp is configured to allocate a thread-local connection, so\n> those connections are rarely doing anything more than half the time.\n\nLower your shared buffers to about 20% of your RAM, unless you've tested\nit's actually helping in your particular case. It's unlikely you'll get\nbetter performance by using more than that, especially on older\nversions, so it's wiser to leave the rest for page cache.\n\nIt might even be one of the causes of the performance issue you're\nseeing, as shared buffers are not exactly overhead-free.\n\nSee this for more details on tuning:\n\n http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nYou're on a rather old 9.1.x version, BTW. The last version in this\nbranch is 9.1.10 and there are some important security fixes (e.g. in\n9.1.9). Not sure if there are any fixes relevant to the performance\nissue, though.\n\nA few initial questions:\n\n* What OS are we dealing with?\n\n* So how many active connections are there on average (see\n pg_stat_activity for connections running queries)?\n\n* How much data are we talking about? In total and in the imports?\n\n> We have been running smoothly for over a year on this configuration,\n> and recently started having huge CPU spikes that bring the system to\n> its knees. Given that it is a multiuser system, it has been quite\n> hard to pinpoint the exact cause, but I think we've narrowed it down\n> to two data import jobs that were running in semi-long transactions\n> (clusters of row inserts).\n> \n> The tables affected by these inserts are used in common queries.\n> \n> The imports will bring in a row count of perhaps 10k on average\n> covering 4 tables.\n> \n> The insert transactions are at isolation level read committed (the \n> default for the JDBC driver).\n> \n> When the import would run (again, theory...we have not been able to \n> reproduce), we would end up maxed out on CPU, with a load average of\n> 50 for 16 CPUs (our normal busy usage is a load average of 5 out of\n> 16 CPUs).\n>\n> When looking at the active queries, most of them are against the\n> tables that are affected by these imports.\n\nWhich processes consume most CPU time? Are those backends executing the\nqueries, or some background processes (checkpointer, autovacuum, ...)?\n\nCan you post a \"top -c\" output collected at the time of the CPU peak?\n\nAlso, try to collect a few snapshots of pg_stat_bgwriter catalog before\nand during the loads. Don't forget to include the timestamp:\n\n select now(), * from pg_stat_bgwriter;\n\nand when you're at it, pg_stat_database snapshots might be handy too\n(not sure if you're running a single database or multiple ones), so use\neither\n\n select now(), * from pg_stat_database;\n\nor\n\n select now(), * from pg_stat_database where datname = '..dbname..';\n\nThat should give us at least some insight into what's happening.\n\n> Our workaround (that is holding at present) was to drop the\n> transactions on those imports (which is not optimal, but fortunately\n> is acceptable for this particular data). This workaround has\n> prevented any further incidents, but is of course inconclusive.\n> \n> Does this sound familiar to anyone, and if so, please advise.\n\nI'm wondering how this could be related to the transactions, and IIRC\nthe stats (e.g. # of inserted rows) are sent at commit time. That might\ntrigger the autovacuum. 
But without the transactions the autovacuum\nwould be triggered sooner ...\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Oct 2013 01:42:17 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On 15.10.2013 01:26, Tony Kay wrote:\n> Hi Calvin,\n> \n> Yes, I have sar data on all systems going back for years. \n> \n> Since others are going to probably want to be assured I am really\n> \"reading the data\" right:\n> \n> - This is 92% user CPU time, 5% sys, and 1% soft\n> - On some of the problems, I _do_ see a short spike of pgswpout's\n> (memory pressure), but again, not enough to end up using much system time\n> - The database disks are idle (all data being used is in RAM)..and are\n> SSDs....average service times are barely measurable in ms.\n\nOK. Can you share the data? Maybe we'll notice something suspicious.\n\n> If I had to guess, I'd say it was spinlock misbehavior....I cannot \n> understand why ekse a transaction blocking other things would drive\n> the CPUs so hard into the ground with user time.\n\nHave you tried running perf, to verify the time is actually spent on\nspinlocks?\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Oct 2013 01:45:39 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On Mon, Oct 14, 2013 at 6:45 PM, Tomas Vondra <[email protected]> wrote:\n> On 15.10.2013 01:26, Tony Kay wrote:\n>> Hi Calvin,\n>>\n>> Yes, I have sar data on all systems going back for years.\n>>\n>> Since others are going to probably want to be assured I am really\n>> \"reading the data\" right:\n>>\n>> - This is 92% user CPU time, 5% sys, and 1% soft\n>> - On some of the problems, I _do_ see a short spike of pgswpout's\n>> (memory pressure), but again, not enough to end up using much system time\n>> - The database disks are idle (all data being used is in RAM)..and are\n>> SSDs....average service times are barely measurable in ms.\n>\n> OK. Can you share the data? Maybe we'll notice something suspicious.\n>\n>> If I had to guess, I'd say it was spinlock misbehavior....I cannot\n>> understand why ekse a transaction blocking other things would drive\n>> the CPUs so hard into the ground with user time.\n>\n> Have you tried running perf, to verify the time is actually spent on\n> spinlocks?\n\n+1 this. It is almost certainly spinlocks, but we need to know which\none and why. plz install debug symbols and run a perf during normal\nand high load conditions.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Oct 2013 08:00:44 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On Mon, Oct 14, 2013 at 4:42 PM, Tomas Vondra <[email protected]> wrote:\n\n> On 15.10.2013 01:00, Tony Kay wrote:\n> > Hi,\n> >\n> > I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a\n> > 16 Opteron 6276 CPU box. 
We limit connections to roughly 120, but\n> > our webapp is configured to allocate a thread-local connection, so\n> > those connections are rarely doing anything more than half the time.\n>\n> Lower your shared buffers to about 20% of your RAM, unless you've tested\n> it's actually helping in your particular case. It's unlikely you'll get\n> better performance by using more than that, especially on older\n> versions, so it's wiser to leave the rest for page cache.\n>\n> It might even be one of the causes of the performance issue you're\n> seeing, as shared buffers are not exactly overhead-free.\n>\n> See this for more details on tuning:\n>\n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\n\nI had followed the general directions from several sources years ago, which\nindicate up to 40% of RAM. We've been running very large shared buffers for\n4 years now, but it is difficult to generate a good real load without\ntesting against users, so we have not felt the need to move it around. In\ngeneral, I don't tend to tinker with a setting that has been fine for this\nlong without good reason. I've been wanting to upgrade to the newer\nmmap-based versions of pgsql, but was waiting to re-tune this when I did so.\n\nWhy do you suspect that shared_buffers would cause the behavior I'm seeing?\n\n\n>\n>\n> You're on a rather old 9.1.x version, BTW. The last version in this\n> branch is 9.1.10 and there are some important security fixes (e.g. in\n> 9.1.9). Not sure if there are any fixes relevant to the performance\n> issue, though.\n>\n> An upgrade to 9.1.10 is planned.\n\n\n> A few initial questions:\n>\n> * What OS are we dealing with?\n>\n\nCentOS el6\n\n\n>\n> * So how many active connections are there on average (see\n> pg_stat_activity for connections running queries)?\n>\n\nabout 40-60\n\n\n>\n> * How much data are we talking about? In total and in the imports?\n>\n\n80GB database. The imports are maybe 1-3 MB...often much smaller. 10k rows\nwould be a probably average.\n\n\n>\n> > We have been running smoothly for over a year on this configuration,\n> > and recently started having huge CPU spikes that bring the system to\n> > its knees. Given that it is a multiuser system, it has been quite\n> > hard to pinpoint the exact cause, but I think we've narrowed it down\n> > to two data import jobs that were running in semi-long transactions\n> > (clusters of row inserts).\n> >\n> > The tables affected by these inserts are used in common queries.\n> >\n> > The imports will bring in a row count of perhaps 10k on average\n> > covering 4 tables.\n> >\n> > The insert transactions are at isolation level read committed (the\n> > default for the JDBC driver).\n> >\n> > When the import would run (again, theory...we have not been able to\n> > reproduce), we would end up maxed out on CPU, with a load average of\n> > 50 for 16 CPUs (our normal busy usage is a load average of 5 out of\n> > 16 CPUs).\n> >\n> > When looking at the active queries, most of them are against the\n> > tables that are affected by these imports.\n>\n> Which processes consume most CPU time? 
Are those backends executing the\n> queries, or some background processes (checkpointer, autovacuum, ...)?\n>\n>\nThe backends executing the queries...most of the queries that seem hung\nusually run in a few ms.\n\n\n> Can you post a \"top -c\" output collected at the time of the CPU peak?\n>\n>\nDon't have process accounting, so I cannot regenerate that; however, I can\ntell you what queries were active at one of them.\n\nThere were 36 of the queries agains table ind_event (which is one affected\nby the import). Those queries usually take 5-10ms, and we never see more\nthan 2 active during normal operation. These had been active for\n_minutes_....a sample of the running queries:\n\ntime_active | datname | procpid | query\n\n-----------------+---------------------+---------+-------------------------------------------\n 00:08:10.891105 | tudb | 9058 | select * from\nmr_uss_ind_event_x where (tu\n 00:08:10.981845 | tudb | 8977 | select * from\nmr_uss_ind_event_x where (tu\n 00:07:08.883347 | tudb | 8930 | select * from\nmr_uss_ind_event_x where org\n 00:07:15.266393 | tudb | 8927 | select * from\nmr_uss_ind_event_x where org\n 00:07:27.587133 | tudb | 11867 | update msg_result set\ndt_result=$1,msg_id=\n 00:08:06.458885 | tudb | 8912 | select * from\nmr_uss_ind_event_x where org\n 00:06:43.036487 | tudb | 8887 | select * from\nmr_uss_ind_event_x where (tu\n 00:07:01.992367 | tudb | 8831 | select * from\nmr_uss_ind_event_x where (tu\n 00:06:59.217721 | tudb | 8816 | select * from\nmr_uss_ind_event_x where org\n 00:07:07.558848 | tudb | 8811 | update md_invoice set\nunbilled_amt=unbille\n 00:07:30.636192 | tudb | 8055 | select * from\nmr_uss_ind_event_x where (tu\n 00:07:26.761053 | tudb | 8053 | update msg_result set\ndt_result=$1,msg_id=\n 00:06:46.021084 | tudb | 8793 | select * from\nmr_uss_ind_event_x where (tu\n 00:07:26.412781 | tudb | 8041 | select * from\nmr_uss_ind_event_x where org\n 00:07:43.315019 | tudb | 8031 | select * from\nmr_uss_ind_event_x where org\n 00:07:42.651143 | tudb | 7990 | select * from\nmr_uss_ind_event_x where org\n 00:06:45.258232 | tudb | 7973 | select * from\nmr_uss_ind_event_x where (tu\n 00:07:46.135027 | tudb | 7961 | select * from\nmr_uss_ind_event_x where (tu\n 00:07:31.814513 | tudb | 7959 | select * from\nmr_uss_ind_event_x where (tu\n 00:07:27.739959 | tudb | 8221 | select * from\nmr_uss_ind_event_x where org\n 00:07:21.554369 | tudb | 8191 | select * from\nmr_uss_ind_event_x where org\n 00:07:30.665133 | tudb | 7953 | select * from\nmr_uss_ind_event_x where org\n 00:07:17.727028 | tudb | 7950 | select * from\nmr_uss_ind_event_x where org\n 00:07:25.657611 | tudb | 7948 | select * from\nmr_uss_ind_event_x where org\n 00:07:28.118856 | tudb | 7939 | select * from\nmr_uss_ind_event_x where org\n 00:07:32.436705 | tudb | 7874 | insert into\nmr_uss_ind_event (prelimtime_c\n 00:08:12.090187 | tudb | 7873 | select * from\nmr_uss_ind_event_x where (tu\n 00:07:19.181981 | tudb | 7914 | select * from\nmr_uss_ind_event_x where (tu\n 00:07:04.234119 | tudb | 7909 | select * from\nmr_uss_ind_event_x where (tu\n 00:06:52.614609 | tudb | 7856 | select * from\nmr_uss_ind_event_x where org\n 00:07:18.667903 | tudb | 7908 | select * from\nmr_uss_ind_event_x where (tu\n\nThe insert listed there is coming from that import...the others are\nquerying a view that includes that table in a join.\n\nAlso, try to collect a few snapshots of pg_stat_bgwriter catalog before\n> and during the loads. 
Don't forget to include the timestamp:\n>\n> select now(), * from pg_stat_bgwriter;\n>\n>\nThis is a live production system, and it will take me some doing to\ngenerate a load on a test server that triggers the condition. I'll be\ncertain to gather this and the other stats if I can trigger it.\n\n\n> and when you're at it, pg_stat_database snapshots might be handy too\n> (not sure if you're running a single database or multiple ones), so use\n> either\n>\n> select now(), * from pg_stat_database;\n>\n> or\n>\n> select now(), * from pg_stat_database where datname = '..dbname..';\n>\n> That should give us at least some insight into what's happening.\n>\n> > Our workaround (that is holding at present) was to drop the\n> > transactions on those imports (which is not optimal, but fortunately\n> > is acceptable for this particular data). This workaround has\n> > prevented any further incidents, but is of course inconclusive.\n> >\n> > Does this sound familiar to anyone, and if so, please advise.\n>\n> I'm wondering how this could be related to the transactions, and IIRC\n> the stats (e.g. # of inserted rows) are sent at commit time. That might\n> trigger the autovacuum. But without the transactions the autovacuum\n> would be triggered sooner ...\n>\n> regards\n> Tomas\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Mon, Oct 14, 2013 at 4:42 PM, Tomas Vondra <[email protected]> wrote:\nOn 15.10.2013 01:00, Tony Kay wrote:\n> Hi,\n>\n> I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a\n> 16 Opteron 6276 CPU box. We limit connections to roughly 120, but\n> our webapp is configured to allocate a thread-local connection, so\n> those connections are rarely doing anything more than half the time.\n\nLower your shared buffers to about 20% of your RAM, unless you've tested\nit's actually helping in your particular case. It's unlikely you'll get\nbetter performance by using more than that, especially on older\nversions, so it's wiser to leave the rest for page cache.\n\nIt might even be one of the causes of the performance issue you're\nseeing, as shared buffers are not exactly overhead-free.\n\nSee this for more details on tuning:\n\n   http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_ServerI had followed the general directions from several sources years ago, which indicate up to 40% of RAM. We've been running very large shared buffers for 4 years now, but it is difficult to generate a good real load without testing against users, so we have not felt the need to move it around. In general, I don't tend to tinker with a setting that has been fine for this long without good reason. I've been wanting to upgrade to the newer mmap-based versions of pgsql, but was waiting to re-tune this when I did so.\nWhy do you suspect that shared_buffers would cause the behavior I'm seeing? \n\n\nYou're on a rather old 9.1.x version, BTW. The last version in this\nbranch is 9.1.10 and there are some important security fixes (e.g. in\n9.1.9). Not sure if there are any fixes relevant to the performance\nissue, though.\nAn upgrade to 9.1.10 is planned. \n\n\nA few initial questions:\n\n* What OS are we dealing with?CentOS el6 \n\n* So how many active connections are there on average (see\n  pg_stat_activity for connections running queries)?about 40-60 \n\n* How much data are we talking about? In total and in the imports?80GB database. 
The imports are maybe 1-3 MB...often much smaller. 10k rows would be a probably average. \n\n\n> We have been running smoothly for over a year on this configuration,\n> and recently started having huge CPU spikes that bring the system to\n> its knees. Given that it is a multiuser system, it has been quite\n> hard to pinpoint the exact cause, but I think we've narrowed it down\n> to two data import jobs that were running in semi-long transactions\n> (clusters of row inserts).\n>\n> The tables affected by these inserts are used in common queries.\n>\n> The imports will bring in a row count of perhaps 10k on average\n> covering 4 tables.\n>\n> The insert transactions are at isolation level read committed (the\n> default for the JDBC driver).\n>\n> When the import would run (again, theory...we have not been able to\n> reproduce), we would end up maxed out on CPU, with a load average of\n> 50 for 16 CPUs (our normal busy usage is a load average of 5 out of\n> 16 CPUs).\n>\n> When looking at the active queries, most of them are against the\n> tables that are affected by these imports.\n\nWhich processes consume most CPU time? Are those backends executing the\nqueries, or some background processes (checkpointer, autovacuum, ...)?\nThe backends executing the queries...most of the queries that seem hung usually run in a few ms. \n\n\nCan you post a \"top -c\" output collected at the time of the CPU peak?\nDon't have process accounting, so I cannot regenerate that; however, I can tell you what queries were active at one of them. There were 36 of the queries agains table ind_event (which is one affected by the import). Those queries usually take 5-10ms, and we never see more than 2 active during normal operation. These had been active for _minutes_....a sample of the running queries:\ntime_active   |       datname       | procpid |   query                                      -----------------+---------------------+---------+-------------------------------------------\n 00:08:10.891105 | tudb                |    9058 | select * from mr_uss_ind_event_x where (tu 00:08:10.981845 | tudb                |    8977 | select * from mr_uss_ind_event_x where (tu 00:07:08.883347 | tudb                |    8930 | select * from mr_uss_ind_event_x where org\n 00:07:15.266393 | tudb                |    8927 | select * from mr_uss_ind_event_x where org 00:07:27.587133 | tudb                |   11867 | update msg_result set dt_result=$1,msg_id= 00:08:06.458885 | tudb                |    8912 | select * from mr_uss_ind_event_x where org\n 00:06:43.036487 | tudb                |    8887 | select * from mr_uss_ind_event_x where (tu 00:07:01.992367 | tudb                |    8831 | select * from mr_uss_ind_event_x where (tu 00:06:59.217721 | tudb                |    8816 | select * from mr_uss_ind_event_x where org\n 00:07:07.558848 | tudb                |    8811 | update md_invoice set unbilled_amt=unbille 00:07:30.636192 | tudb                |    8055 | select * from mr_uss_ind_event_x where (tu 00:07:26.761053 | tudb                |    8053 | update msg_result set dt_result=$1,msg_id=\n 00:06:46.021084 | tudb                |    8793 | select * from mr_uss_ind_event_x where (tu 00:07:26.412781 | tudb                |    8041 | select * from mr_uss_ind_event_x where org 00:07:43.315019 | tudb                |    8031 | select * from mr_uss_ind_event_x where org\n 00:07:42.651143 | tudb                |    7990 | select * from mr_uss_ind_event_x where org 00:06:45.258232 | tudb                |    7973 | select * 
from mr_uss_ind_event_x where (tu 00:07:46.135027 | tudb                |    7961 | select * from mr_uss_ind_event_x where (tu\n 00:07:31.814513 | tudb                |    7959 | select * from mr_uss_ind_event_x where (tu 00:07:27.739959 | tudb                |    8221 | select * from mr_uss_ind_event_x where org 00:07:21.554369 | tudb                |    8191 | select * from mr_uss_ind_event_x where org\n 00:07:30.665133 | tudb                |    7953 | select * from mr_uss_ind_event_x where org 00:07:17.727028 | tudb                |    7950 | select * from mr_uss_ind_event_x where org 00:07:25.657611 | tudb                |    7948 | select * from mr_uss_ind_event_x where org\n 00:07:28.118856 | tudb                |    7939 | select * from mr_uss_ind_event_x where org 00:07:32.436705 | tudb                |    7874 | insert into mr_uss_ind_event (prelimtime_c 00:08:12.090187 | tudb                |    7873 | select * from mr_uss_ind_event_x where (tu\n 00:07:19.181981 | tudb                |    7914 | select * from mr_uss_ind_event_x where (tu 00:07:04.234119 | tudb                |    7909 | select * from mr_uss_ind_event_x where (tu 00:06:52.614609 | tudb                |    7856 | select * from mr_uss_ind_event_x where org\n 00:07:18.667903 | tudb                |    7908 | select * from mr_uss_ind_event_x where (tuThe insert listed there is coming from that import...the others are querying a view that includes that table in a join.\n\nAlso, try to collect a few snapshots of pg_stat_bgwriter catalog before\nand during the loads. Don't forget to include the timestamp:\n\n   select now(), * from pg_stat_bgwriter;\nThis is a live production system, and it will take me some doing to generate a load on a test server that triggers the condition. I'll be certain to gather this and the other stats if I can trigger it.\n \nand when you're at it, pg_stat_database snapshots might be handy too\n(not sure if you're running a single database or multiple ones), so use\neither\n\n  select now(), * from pg_stat_database;\n\nor\n\n  select now(), * from pg_stat_database where datname = '..dbname..';\n\nThat should give us at least some insight into what's happening.\n\n> Our workaround (that is holding at present) was to drop the\n> transactions on those imports (which is not optimal, but fortunately\n> is acceptable for this particular data). This workaround has\n> prevented any further incidents, but is of course inconclusive.\n>\n> Does this sound familiar to anyone, and if so, please advise.\n\nI'm wondering how this could be related to the transactions, and IIRC\nthe stats (e.g. # of inserted rows) are sent at commit time. That might\ntrigger the autovacuum. But without the transactions the autovacuum\nwould be triggered sooner ...\n\nregards\nTomas\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 15 Oct 2013 08:59:08 -0700", "msg_from": "Tony Kay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "Thanks for the tip. I forgot there were kernel stats on spinlocks.\n\nI'm not sure we'll be able to get it to tip in a test environment, and\nwe're unwilling to revert the code in production in order to have our users\ntrigger it. 
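A rough sketch of the kind of capture being asked for in the quoted advice below, assuming perf and the PostgreSQL debug symbols are installed; durations and file names are arbitrary:

    # System-wide, with call graphs, for ~60 seconds during a spike;
    # repeat once under normal load so the two profiles can be compared.
    perf record -a -g -o /tmp/perf.spike.data -- sleep 60
    perf report -i /tmp/perf.spike.data

    # Or attach to one of the busy backends seen in top:
    perf record -g -p <pid_of_busy_backend> -- sleep 30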
We'll try triggering it on our test server, and if we manage,\nI'll get you the stats.\n\nThanks!\n\nTony\n\n\nTony Kay\n\nTeamUnify, LLC\nTU Corporate Website <http://www.teamunify.com/>\nTU Facebook <http://www.facebook.com/teamunify> | Free OnDeck Mobile\nApps<http://www.teamunify.com/__corp__/ondeck/>\n\n\n\nOn Tue, Oct 15, 2013 at 6:00 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Mon, Oct 14, 2013 at 6:45 PM, Tomas Vondra <[email protected]> wrote:\n> > On 15.10.2013 01:26, Tony Kay wrote:\n> >> Hi Calvin,\n> >>\n> >> Yes, I have sar data on all systems going back for years.\n> >>\n> >> Since others are going to probably want to be assured I am really\n> >> \"reading the data\" right:\n> >>\n> >> - This is 92% user CPU time, 5% sys, and 1% soft\n> >> - On some of the problems, I _do_ see a short spike of pgswpout's\n> >> (memory pressure), but again, not enough to end up using much system\n> time\n> >> - The database disks are idle (all data being used is in RAM)..and are\n> >> SSDs....average service times are barely measurable in ms.\n> >\n> > OK. Can you share the data? Maybe we'll notice something suspicious.\n> >\n> >> If I had to guess, I'd say it was spinlock misbehavior....I cannot\n> >> understand why ekse a transaction blocking other things would drive\n> >> the CPUs so hard into the ground with user time.\n> >\n> > Have you tried running perf, to verify the time is actually spent on\n> > spinlocks?\n>\n> +1 this. It is almost certainly spinlocks, but we need to know which\n> one and why. plz install debug symbols and run a perf during normal\n> and high load conditions.\n>\n> merlin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks for the tip. I forgot there were kernel stats on spinlocks.I'm not sure we'll be able to get it to tip in a test environment, and we're unwilling to revert the code in production in order to have our users trigger it. We'll try triggering it on our test server, and if we manage, I'll get you the stats.\nThanks!Tony\nTony KayTeamUnify, LLCTU Corporate Website\nTU Facebook | Free OnDeck Mobile Apps\n\nOn Tue, Oct 15, 2013 at 6:00 AM, Merlin Moncure <[email protected]> wrote:\nOn Mon, Oct 14, 2013 at 6:45 PM, Tomas Vondra <[email protected]> wrote:\n> On 15.10.2013 01:26, Tony Kay wrote:\n>> Hi Calvin,\n>>\n>> Yes, I have sar data on all systems going back for years.\n>>\n>> Since others are going to probably want to be assured I am really\n>> \"reading the data\" right:\n>>\n>> - This is 92% user CPU time, 5% sys, and 1% soft\n>> - On some of the problems, I _do_ see a short spike of pgswpout's\n>> (memory pressure), but again, not enough to end up using much system time\n>> - The database disks are idle (all data being used is in RAM)..and are\n>> SSDs....average service times are barely measurable in ms.\n>\n> OK. Can you share the data? Maybe we'll notice something suspicious.\n>\n>> If I had to guess, I'd say it was spinlock misbehavior....I cannot\n>> understand why ekse a transaction blocking other things would drive\n>> the CPUs so hard into the ground with user time.\n>\n> Have you tried running perf, to verify the time is actually spent on\n> spinlocks?\n\n+1 this.  It is almost certainly spinlocks, but we need to know which\none and why.  
plz install debug symbols and run a perf during normal\nand high load conditions.\n\nmerlin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 15 Oct 2013 09:09:36 -0700", "msg_from": "Tony Kay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On Tue, Oct 15, 2013 at 08:59:08AM -0700, Tony Kay wrote:\n> On Mon, Oct 14, 2013 at 4:42 PM, Tomas Vondra <[email protected]> wrote:\n> \n> > On 15.10.2013 01:00, Tony Kay wrote:\n> > > Hi,\n> > >\n> > > I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a\n> > > 16 Opteron 6276 CPU box. We limit connections to roughly 120, but\n> > > our webapp is configured to allocate a thread-local connection, so\n> > > those connections are rarely doing anything more than half the time.\n> >\n> > Lower your shared buffers to about 20% of your RAM, unless you've tested\n> > it's actually helping in your particular case. It's unlikely you'll get\n> > better performance by using more than that, especially on older\n> > versions, so it's wiser to leave the rest for page cache.\n> >\n> > It might even be one of the causes of the performance issue you're\n> > seeing, as shared buffers are not exactly overhead-free.\n> >\n> > See this for more details on tuning:\n> >\n> > http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n> \n> \n> I had followed the general directions from several sources years ago, which\n> indicate up to 40% of RAM. We've been running very large shared buffers for\n\nin general it's best to start with 10-15% of the RAM and no more then\n2-4 GB\n\n> 4 years now, but it is difficult to generate a good real load without\n> testing against users, so we have not felt the need to move it around. In\n> general, I don't tend to tinker with a setting that has been fine for this\n> long without good reason. I've been wanting to upgrade to the newer\n> mmap-based versions of pgsql, but was waiting to re-tune this when I did so.\n> \n> Why do you suspect that shared_buffers would cause the behavior I'm seeing?\n> \n\nfor two reasons:\n\n- some of the overhead of bgwriter and checkpoints is more or less linear \nin the size of shared_buffers, for example it could be possible that a\nlarge quantity of data could be dirty when a checkpoint occurs).\n\n- the OS cache is also being used for reads and writes, the larger\n shared_buffers is, the more you risk double buffering (same blocks \n in the OS cache and in the database buffer cache).\n\n> \n> >\n> >\n> > You're on a rather old 9.1.x version, BTW. The last version in this\n> > branch is 9.1.10 and there are some important security fixes (e.g. in\n> > 9.1.9). Not sure if there are any fixes relevant to the performance\n> > issue, though.\n> >\n> > An upgrade to 9.1.10 is planned.\n> \n> \n> > A few initial questions:\n> >\n> > * What OS are we dealing with?\n> >\n> \n> CentOS el6\n> \n> \n> >\n> > * So how many active connections are there on average (see\n> > pg_stat_activity for connections running queries)?\n> >\n> \n> about 40-60\n> \n> \n> >\n> > * How much data are we talking about? In total and in the imports?\n> >\n> \n> 80GB database. The imports are maybe 1-3 MB...often much smaller. 
10k rows\n> would be a probably average.\n> \n> \n> >\n> > > We have been running smoothly for over a year on this configuration,\n> > > and recently started having huge CPU spikes that bring the system to\n> > > its knees. Given that it is a multiuser system, it has been quite\n> > > hard to pinpoint the exact cause, but I think we've narrowed it down\n> > > to two data import jobs that were running in semi-long transactions\n> > > (clusters of row inserts).\n> > >\n> > > The tables affected by these inserts are used in common queries.\n> > >\n> > > The imports will bring in a row count of perhaps 10k on average\n> > > covering 4 tables.\n> > >\n> > > The insert transactions are at isolation level read committed (the\n> > > default for the JDBC driver).\n> > >\n> > > When the import would run (again, theory...we have not been able to\n> > > reproduce), we would end up maxed out on CPU, with a load average of\n> > > 50 for 16 CPUs (our normal busy usage is a load average of 5 out of\n> > > 16 CPUs).\n> > >\n> > > When looking at the active queries, most of them are against the\n> > > tables that are affected by these imports.\n> >\n> > Which processes consume most CPU time? Are those backends executing the\n> > queries, or some background processes (checkpointer, autovacuum, ...)?\n> >\n> >\n> The backends executing the queries...most of the queries that seem hung\n> usually run in a few ms.\n> \n> \n> > Can you post a \"top -c\" output collected at the time of the CPU peak?\n> >\n> >\n> Don't have process accounting, so I cannot regenerate that; however, I can\n> tell you what queries were active at one of them.\n> \n> There were 36 of the queries agains table ind_event (which is one affected\n> by the import). Those queries usually take 5-10ms, and we never see more\n> than 2 active during normal operation. 
These had been active for\n> _minutes_....a sample of the running queries:\n> \n> time_active | datname | procpid | query\n> \n> -----------------+---------------------+---------+-------------------------------------------\n> 00:08:10.891105 | tudb | 9058 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:08:10.981845 | tudb | 8977 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:07:08.883347 | tudb | 8930 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:15.266393 | tudb | 8927 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:27.587133 | tudb | 11867 | update msg_result set\n> dt_result=$1,msg_id=\n> 00:08:06.458885 | tudb | 8912 | select * from\n> mr_uss_ind_event_x where org\n> 00:06:43.036487 | tudb | 8887 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:07:01.992367 | tudb | 8831 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:06:59.217721 | tudb | 8816 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:07.558848 | tudb | 8811 | update md_invoice set\n> unbilled_amt=unbille\n> 00:07:30.636192 | tudb | 8055 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:07:26.761053 | tudb | 8053 | update msg_result set\n> dt_result=$1,msg_id=\n> 00:06:46.021084 | tudb | 8793 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:07:26.412781 | tudb | 8041 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:43.315019 | tudb | 8031 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:42.651143 | tudb | 7990 | select * from\n> mr_uss_ind_event_x where org\n> 00:06:45.258232 | tudb | 7973 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:07:46.135027 | tudb | 7961 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:07:31.814513 | tudb | 7959 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:07:27.739959 | tudb | 8221 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:21.554369 | tudb | 8191 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:30.665133 | tudb | 7953 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:17.727028 | tudb | 7950 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:25.657611 | tudb | 7948 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:28.118856 | tudb | 7939 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:32.436705 | tudb | 7874 | insert into\n> mr_uss_ind_event (prelimtime_c\n> 00:08:12.090187 | tudb | 7873 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:07:19.181981 | tudb | 7914 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:07:04.234119 | tudb | 7909 | select * from\n> mr_uss_ind_event_x where (tu\n> 00:06:52.614609 | tudb | 7856 | select * from\n> mr_uss_ind_event_x where org\n> 00:07:18.667903 | tudb | 7908 | select * from\n> mr_uss_ind_event_x where (tu\n> \n> The insert listed there is coming from that import...the others are\n> querying a view that includes that table in a join.\n> \n> Also, try to collect a few snapshots of pg_stat_bgwriter catalog before\n> > and during the loads. Don't forget to include the timestamp:\n> >\n> > select now(), * from pg_stat_bgwriter;\n> >\n> >\n> This is a live production system, and it will take me some doing to\n> generate a load on a test server that triggers the condition. 
I'll be\n> certain to gather this and the other stats if I can trigger it.\n> \n> \n> > and when you're at it, pg_stat_database snapshots might be handy too\n> > (not sure if you're running a single database or multiple ones), so use\n> > either\n> >\n> > select now(), * from pg_stat_database;\n> >\n> > or\n> >\n> > select now(), * from pg_stat_database where datname = '..dbname..';\n> >\n> > That should give us at least some insight into what's happening.\n> >\n> > > Our workaround (that is holding at present) was to drop the\n> > > transactions on those imports (which is not optimal, but fortunately\n> > > is acceptable for this particular data). This workaround has\n> > > prevented any further incidents, but is of course inconclusive.\n> > >\n> > > Does this sound familiar to anyone, and if so, please advise.\n> >\n> > I'm wondering how this could be related to the transactions, and IIRC\n> > the stats (e.g. # of inserted rows) are sent at commit time. That might\n> > trigger the autovacuum. But without the transactions the autovacuum\n> > would be triggered sooner ...\n> >\n> > regards\n> > Tomas\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Oct 2013 19:26:23 +0200", "msg_from": "Julien Cigar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On Tue, Oct 15, 2013 at 10:26 AM, Julien Cigar <[email protected]> wrote:\n\n>\n> for two reasons:\n>\n> - some of the overhead of bgwriter and checkpoints is more or less linear\n> in the size of shared_buffers, for example it could be possible that a\n> large quantity of data could be dirty when a checkpoint occurs).\n>\n> - the OS cache is also being used for reads and writes, the larger\n> shared_buffers is, the more you risk double buffering (same blocks\n> in the OS cache and in the database buffer cache).\n>\n\nExcellent. Thank you for the information. My suspicion has always been that\nthe shared_buffers are \"level 1 cache\", so it seems to me that you'd want\nthat to be large enough to hold your entire database if you could. 
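Whether that is even feasible is easy to sanity-check by putting the database size next to the configured shared_buffers and the machine's RAM; a trivial sketch, with the database name taken from the queries shown earlier in the thread:

    psql -d tudb -c "select pg_size_pretty(pg_database_size(current_database()));"
    psql -d tudb -c "show shared_buffers;"
    free -m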
However,\nI'm now realizing that I was _also_ assuming the IPC shared memory was also\nbeing locked via mlock to prevent swapping it out, and I'm now getting the\nfeeling that this isn't true, which means the double buffering could lead\nto swap space use on buffer cache pressure...which I do occasionally see in\nways I had not expected.\n\nWe do streaming replication and also store them for snapshot PITR backups,\nso I am intimately familiar with our write load, and I can say it is pretty\nlow (we ship a 16MB WAL file about every 10-15 minutes during our busiest\ntimes).\n\nThat said, I can see how an import that is doing a bunch of writes could\npossibly spread those over a large area that would then consume a lot of\nCPU on the writer and checkpoint; however, I do not see how either of those\nwould cause 40-60 different postgres backgroud processes (all running a\nnormally \"light query\") to spin off into oblivion unless the write work\nload is somehow threaded into the background workers (which I'm sure it\nisn't). So, I think we're still dealing with a spinlock issue.\n\nWe're going to upgrade to 9.1.10 (with debug symbols) Thursday night and\nadd another 64GB of RAM. I'll tune shared_buffers down to 2GB at that time\nand bump effective_cache_size up at the same time. My large history of sar\ndata will make it apparent pretty quickly if that is a win/lose/tie.\n\nIf we have another spike in production, we'll be ready to measure it more\naccurately.\n\nThanks,\n\nTony\n\nOn Tue, Oct 15, 2013 at 10:26 AM, Julien Cigar <[email protected]> wrote:\n\nfor two reasons:\n\n- some of the overhead of bgwriter and checkpoints is more or less linear\nin the size of shared_buffers, for example it could be possible that a\nlarge quantity of data could be dirty when a checkpoint occurs).\n\n- the OS cache is also being used for reads and writes, the larger\n  shared_buffers is, the more you risk double buffering (same blocks\n  in the OS cache and in the database buffer cache).Excellent. Thank you for the information. My suspicion has always been that the shared_buffers are \"level 1 cache\", so it seems to me that you'd want that to be large enough to hold your entire database if you could. However, I'm now realizing that I was _also_ assuming the IPC shared memory was also being locked via mlock to prevent swapping it out, and I'm now getting the feeling that this isn't true, which means the double buffering could lead to swap space use on buffer cache pressure...which I do occasionally see in ways I had not expected.\nWe do streaming replication and also store them for snapshot PITR backups, so I am intimately familiar with our write load, and I can say it is pretty low (we ship a 16MB WAL file about every 10-15 minutes during our busiest times).\nThat said, I can see how an import that is doing a bunch of writes could possibly spread those over a large area that would then consume a lot of CPU on the writer and checkpoint; however, I do not see how either of those would cause 40-60 different postgres backgroud processes (all running a normally \"light query\") to spin off into oblivion unless the write work load is somehow threaded into the background workers (which I'm sure it isn't). So, I think we're still dealing with a spinlock issue.\nWe're going to upgrade to 9.1.10 (with debug symbols) Thursday night and add another 64GB of RAM. I'll tune shared_buffers down to 2GB at that time and bump effective_cache_size up at the same time. 
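In postgresql.conf terms the change being described would look roughly like the sketch below; the effective_cache_size figure is only a guess at about three quarters of the post-upgrade RAM (32GB plus the additional 64GB), not a value stated in the thread:

    # postgresql.conf (shared_buffers requires a restart to take effect)
    shared_buffers = 2GB
    effective_cache_size = 72GB

    # Confirm after the restart:
    psql -d tudb -c "show shared_buffers;"
    psql -d tudb -c "show effective_cache_size;"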
My large history of sar data will make it apparent pretty quickly if that is a win/lose/tie.\nIf we have another spike in production, we'll be ready to measure it more accurately.Thanks,\nTony", "msg_date": "Tue, 15 Oct 2013 12:07:38 -0700", "msg_from": "Tony Kay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On Tue, Oct 15, 2013 at 12:07:38PM -0700, Tony Kay wrote:\n> On Tue, Oct 15, 2013 at 10:26 AM, Julien Cigar <[email protected]> wrote:\n> \n> >\n> > for two reasons:\n> >\n> > - some of the overhead of bgwriter and checkpoints is more or less linear\n> > in the size of shared_buffers, for example it could be possible that a\n> > large quantity of data could be dirty when a checkpoint occurs).\n> >\n> > - the OS cache is also being used for reads and writes, the larger\n> > shared_buffers is, the more you risk double buffering (same blocks\n> > in the OS cache and in the database buffer cache).\n> >\n> \n> Excellent. Thank you for the information. My suspicion has always been that\n> the shared_buffers are \"level 1 cache\", so it seems to me that you'd want\n> that to be large enough to hold your entire database if you could. However,\n> I'm now realizing that I was _also_ assuming the IPC shared memory was also\n> being locked via mlock to prevent swapping it out, and I'm now getting the\n\non FreeBSD you can use kern.ipc.shm_use_phys=1 to lock shared memory\npages in core (note that it's useless on 9.3 as mmap is now used to \nallocate shared memory)\n\n(and BTW I'm curious is someone has done benchmarks on FreeBSD + 9.3 +\nmmap, because enabling kern.ipc.shm_use_phys leads to a 2-4x perf\nimprovement in some benchmarks)\n\n> feeling that this isn't true, which means the double buffering could lead\n> to swap space use on buffer cache pressure...which I do occasionally see in\n> ways I had not expected.\n\nyou can't avoid double buffering sometime, for example if a block is\nread from disk and has not been requested previously it will first go to\nthe OS cache and then to the buffer cache. In an ideal world block that\nare most frequently used should be in the database buffer cache, and\nothers in the OS cache.\n\n> \n> We do streaming replication and also store them for snapshot PITR backups,\n> so I am intimately familiar with our write load, and I can say it is pretty\n> low (we ship a 16MB WAL file about every 10-15 minutes during our busiest\n> times).\n> \n> That said, I can see how an import that is doing a bunch of writes could\n> possibly spread those over a large area that would then consume a lot of\n> CPU on the writer and checkpoint; however, I do not see how either of those\n> would cause 40-60 different postgres backgroud processes (all running a\n> normally \"light query\") to spin off into oblivion unless the write work\n> load is somehow threaded into the background workers (which I'm sure it\n> isn't). So, I think we're still dealing with a spinlock issue.\n> \n> We're going to upgrade to 9.1.10 (with debug symbols) Thursday night and\n> add another 64GB of RAM. I'll tune shared_buffers down to 2GB at that time\n> and bump effective_cache_size up at the same time. 
My large history of sar\n> data will make it apparent pretty quickly if that is a win/lose/tie.\n> \n> If we have another spike in production, we'll be ready to measure it more\n> accurately.\n> \n> Thanks,\n> \n> Tony\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Oct 2013 21:41:06 +0200", "msg_from": "Julien Cigar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On 15.10.2013 21:07, Tony Kay wrote:\n> \n> On Tue, Oct 15, 2013 at 10:26 AM, Julien Cigar <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> \n> for two reasons:\n> \n> - some of the overhead of bgwriter and checkpoints is more or less \n> linear in the size of shared_buffers, for example it could be \n> possible that a large quantity of data could be dirty when a \n> checkpoint occurs).\n> \n> - the OS cache is also being used for reads and writes, the larger \n> shared_buffers is, the more you risk double buffering (same blocks\n> in the OS cache and in the database buffer cache).\n> \n> \n> Excellent. Thank you for the information. My suspicion has always \n> been that the shared_buffers are \"level 1 cache\", so it seems to me \n> that you'd want that to be large enough to hold your entire database \n> if you could. However, I'm now realizing that I was _also_ assuming \n> the IPC shared memory was also being locked via mlock to prevent \n> swapping it out, and I'm now getting the feeling that this isn't \n> true, which means the double buffering could lead to swap space use \n> on buffer cache pressure...which I do occasionally see in ways I had \n> not expected.\n> \n> We do streaming replication and also store them for snapshot PITR \n> backups, so I am intimately familiar with our write load, and I can \n> say it is pretty low (we ship a 16MB WAL file about every 10-15 \n> minutes during our busiest times).\n> \n> That said, I can see how an import that is doing a bunch of writes \n> could possibly spread those over a large area that would then\n> consume a lot of CPU on the writer and checkpoint; however, I do not\n> see how either of those would cause 40-60 different postgres\n> backgroud processes (all running a normally \"light query\") to spin\n> off into oblivion unless the write work load is somehow threaded into\n> the background workers (which I'm sure it isn't). So, I think we're\n> still dealing with a spinlock issue.\n> \n> We're going to upgrade to 9.1.10 (with debug symbols) Thursday night \n> and add another 64GB of RAM. I'll tune shared_buffers down to 2GB at \n> that time and bump effective_cache_size up at the same time. My\n> large history of sar data will make it apparent pretty quickly if\n> that is a win/lose/tie.\n\nDon't be too aggressive, though. You haven't identified the bottleneck\nyet and 2GB might be too low. For example we're running 9.1.x too, and\nwe're generally quite happy with 10GB shared buffers (on machines with\n96GB of RAM). 
So although 22GB is definitely too much, but 2GB might be\ntoo low, especially if you add more RAM into the machine.\n\nWhat you may do is inspect the buffer cache with this contrib module:\n\n http://www.postgresql.org/docs/9.1/interactive/pgbuffercache.html\n\nDoing something as simple as this:\n\n select (reldatabase is not null), count(*)\n from pg_buffercache group by 1;\n\n select usagecount, count(*)\n from pg_buffercache where reldatabase is not null group by 1;\n\n select isdirty, count(*)\n from pg_buffercache where reldatabase is not null group by 1;\n\nshould tell you some very basic metrics, i.e. what portion of buffers\nyou actually use, what is the LRU usage count histogram and what portion\nof shared buffers is dirty.\n\nAgain, you'll have to run this repeatedly during the import job, to get\nan idea of what's going on and size the shared buffers reasonably for\nyour workload.\n\nBe careful - this needs to acquire some locks to get a consistent\nresult, so don't run that too frequently.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Oct 2013 23:46:03 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On Tue, Oct 15, 2013 at 12:26 PM, Julien Cigar <[email protected]> wrote:\n> On Tue, Oct 15, 2013 at 08:59:08AM -0700, Tony Kay wrote:\n>> On Mon, Oct 14, 2013 at 4:42 PM, Tomas Vondra <[email protected]> wrote:\n>>\n>> > On 15.10.2013 01:00, Tony Kay wrote:\n>> > > Hi,\n>> > >\n>> > > I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a\n>> > > 16 Opteron 6276 CPU box. We limit connections to roughly 120, but\n>> > > our webapp is configured to allocate a thread-local connection, so\n>> > > those connections are rarely doing anything more than half the time.\n>> >\n>> > Lower your shared buffers to about 20% of your RAM, unless you've tested\n>> > it's actually helping in your particular case. It's unlikely you'll get\n>> > better performance by using more than that, especially on older\n>> > versions, so it's wiser to leave the rest for page cache.\n>> >\n>> > It might even be one of the causes of the performance issue you're\n>> > seeing, as shared buffers are not exactly overhead-free.\n>> >\n>> > See this for more details on tuning:\n>> >\n>> > http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>>\n>>\n>> I had followed the general directions from several sources years ago, which\n>> indicate up to 40% of RAM. We've been running very large shared buffers for\n>\n> in general it's best to start with 10-15% of the RAM and no more then\n> 2-4 GB\n>\n>> 4 years now, but it is difficult to generate a good real load without\n>> testing against users, so we have not felt the need to move it around. In\n>> general, I don't tend to tinker with a setting that has been fine for this\n>> long without good reason. 
I've been wanting to upgrade to the newer\n>> mmap-based versions of pgsql, but was waiting to re-tune this when I did so.\n>>\n>> Why do you suspect that shared_buffers would cause the behavior I'm seeing?\n>>\n>\n> for two reasons:\n>\n> - some of the overhead of bgwriter and checkpoints is more or less linear\n> in the size of shared_buffers, for example it could be possible that a\n> large quantity of data could be dirty when a checkpoint occurs).\n>\n> - the OS cache is also being used for reads and writes, the larger\n> shared_buffers is, the more you risk double buffering (same blocks\n> in the OS cache and in the database buffer cache).\n\nThat's good reasoning but is not related to the problem faced by the\nOP. The real reason why I recommend to keep shared buffers at max\n2GB, always, is because we have major contention issues which we\npresume are in the buffer area (either in the mapping or in the clock\nsweep) but could be something else entirely. These issues tend to\nshow up on fast machines in all- or mostly- read workloads.\n\nWe are desperate for profiles demonstrating the problem in production\nworkloads. If OP is willing to install and run perf in production\n(which is not a bad idea anyways), then my advice is to change nothing\nuntil we have a chance to grab a profile. These types of problems are\nnotoriously difficult to reproduce in test environments.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Oct 2013 22:14:34 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "Hi,\n\nApologies for resurrecting this old thread, but it seems like this is\nbetter than starting a new conversation.\n\nWe are now running 9.1.13 and have doubled the CPU and memory. So 2x 16\nOpteron 6276 (32 cores total), and 64GB memory. shared_buffers set to 20G,\neffective_cache_size set to 40GB.\n\nWe were able to record perf data during the latest incident of high CPU\nutilization. perf report is below:\n\nSamples: 31M of event 'cycles', Event count (approx.): 16289978380877\n 44.74% postmaster [kernel.kallsyms] [k]\n_spin_lock_irqsave\n 15.03% postmaster postgres [.]\n0x00000000002ea937\n 3.14% postmaster postgres [.] s_lock\n\n 2.30% postmaster [kernel.kallsyms] [k]\ncompaction_alloc\n 2.21% postmaster postgres [.]\nHeapTupleSatisfiesMVCC\n 1.75% postmaster postgres [.]\nhash_search_with_hash_value\n 1.25% postmaster postgres [.]\nExecScanHashBucket\n 1.20% postmaster postgres [.] SHMQueueNext\n\n 1.05% postmaster postgres [.] slot_getattr\n\n 1.04% init [kernel.kallsyms] [k]\nnative_safe_halt\n 0.73% postmaster postgres [.] LWLockAcquire\n\n 0.59% postmaster [kernel.kallsyms] [k] page_fault\n\n 0.52% postmaster postgres [.] ExecQual\n\n 0.40% postmaster postgres [.] ExecStoreTuple\n\n 0.38% postmaster postgres [.] ExecScan\n\n 0.37% postmaster postgres [.]\ncheck_stack_depth\n 0.35% postmaster postgres [.] SearchCatCache\n\n 0.35% postmaster postgres [.]\nCheckForSerializableConflictOut\n 0.34% postmaster postgres [.] LWLockRelease\n\n 0.30% postmaster postgres [.] _bt_checkkeys\n\n 0.28% postmaster libc-2.12.so [.] memcpy\n\n 0.27% postmaster [kernel.kallsyms] [k]\nget_pageblock_flags_group\n 0.27% postmaster postgres [.] 
int4eq\n\n 0.27% postmaster postgres [.]\nheap_page_prune_opt\n 0.27% postmaster postgres [.]\npgstat_init_function_usage\n 0.26% postmaster [kernel.kallsyms] [k] _spin_lock\n\n 0.25% postmaster postgres [.] _bt_compare\n\n 0.24% postmaster postgres [.]\npgstat_end_function_usage\n\n...please let me know if we need to produce the report differently to be\nuseful.\n\nWe will begin reducing shared_buffers incrementally over the coming days.\n\n\nDave Owens\n\n541-359-2602\nTU Facebook<https://app.getsignals.com/link?url=https%3A%2F%2Fwww.facebook.com%2Fteamunify&ukey=agxzfnNpZ25hbHNjcnhyGAsSC1VzZXJQcm9maWxlGICAgOCP-IMLDA&k=179943a8-e0fa-494a-f79a-f86a69d3abdc>\n | Free OnDeck Mobile\nApps<https://app.getsignals.com/link?url=http%3A%2F%2Fwww.teamunify.com%2F__corp__%2Fondeck%2F&ukey=agxzfnNpZ25hbHNjcnhyGAsSC1VzZXJQcm9maWxlGICAgOCP-IMLDA&k=504a29f5-3415-405c-d550-195aa1ca1ee3>\n\n\n\nOn Tue, Oct 15, 2013 at 8:14 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Tue, Oct 15, 2013 at 12:26 PM, Julien Cigar <[email protected]> wrote:\n> > On Tue, Oct 15, 2013 at 08:59:08AM -0700, Tony Kay wrote:\n> >> On Mon, Oct 14, 2013 at 4:42 PM, Tomas Vondra <[email protected]> wrote:\n> >>\n> >> > On 15.10.2013 01:00, Tony Kay wrote:\n> >> > > Hi,\n> >> > >\n> >> > > I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a\n> >> > > 16 Opteron 6276 CPU box. We limit connections to roughly 120, but\n> >> > > our webapp is configured to allocate a thread-local connection, so\n> >> > > those connections are rarely doing anything more than half the time.\n> >> >\n> >> > Lower your shared buffers to about 20% of your RAM, unless you've\n> tested\n> >> > it's actually helping in your particular case. It's unlikely you'll\n> get\n> >> > better performance by using more than that, especially on older\n> >> > versions, so it's wiser to leave the rest for page cache.\n> >> >\n> >> > It might even be one of the causes of the performance issue you're\n> >> > seeing, as shared buffers are not exactly overhead-free.\n> >> >\n> >> > See this for more details on tuning:\n> >> >\n> >> > http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n> >>\n> >>\n> >> I had followed the general directions from several sources years ago,\n> which\n> >> indicate up to 40% of RAM. We've been running very large shared buffers\n> for\n> >\n> > in general it's best to start with 10-15% of the RAM and no more then\n> > 2-4 GB\n> >\n> >> 4 years now, but it is difficult to generate a good real load without\n> >> testing against users, so we have not felt the need to move it around.\n> In\n> >> general, I don't tend to tinker with a setting that has been fine for\n> this\n> >> long without good reason. I've been wanting to upgrade to the newer\n> >> mmap-based versions of pgsql, but was waiting to re-tune this when I\n> did so.\n> >>\n> >> Why do you suspect that shared_buffers would cause the behavior I'm\n> seeing?\n> >>\n> >\n> > for two reasons:\n> >\n> > - some of the overhead of bgwriter and checkpoints is more or less linear\n> > in the size of shared_buffers, for example it could be possible that a\n> > large quantity of data could be dirty when a checkpoint occurs).\n> >\n> > - the OS cache is also being used for reads and writes, the larger\n> > shared_buffers is, the more you risk double buffering (same blocks\n> > in the OS cache and in the database buffer cache).\n>\n> That's good reasoning but is not related to the problem faced by the\n> OP. 
The real reason why I recommend to keep shared buffers at max\n> 2GB, always, is because we have major contention issues which we\n> presume are in the buffer area (either in the mapping or in the clock\n> sweep) but could be something else entirely. These issues tend to\n> show up on fast machines in all- or mostly- read workloads.\n>\n> We are desperate for profiles demonstrating the problem in production\n> workloads. If OP is willing to install and run perf in production\n> (which is not a bad idea anyways), then my advice is to change nothing\n> until we have a chance to grab a profile. These types of problems are\n> notoriously difficult to reproduce in test environments.\n>\n> merlin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi,Apologies for resurrecting this old thread, but it seems like this is better than starting a new conversation.We are now running 9.1.13 and have doubled the CPU and memory.  So 2x 16 Opteron 6276 (32 cores total), and 64GB memory.  shared_buffers set to 20G, effective_cache_size set to 40GB.\nWe were able to record perf data during the latest incident of high CPU utilization. perf report is below:\nSamples: 31M of event 'cycles', Event count (approx.): 16289978380877 \n\n 44.74%       postmaster  [kernel.kallsyms]             [k] _spin_lock_irqsave                                      15.03%       postmaster  postgres                      [.] 0x00000000002ea937                                     \n  3.14%       postmaster  postgres                      [.] s_lock                                                   2.30%       postmaster  [kernel.kallsyms]             [k] compaction_alloc                                       \n  2.21%       postmaster  postgres                      [.] HeapTupleSatisfiesMVCC                                   1.75%       postmaster  postgres                      [.] hash_search_with_hash_value                            \n  1.25%       postmaster  postgres                      [.] ExecScanHashBucket                                       1.20%       postmaster  postgres                      [.] SHMQueueNext                                           \n  1.05%       postmaster  postgres                      [.] slot_getattr                                             1.04%             init  [kernel.kallsyms]             [k] native_safe_halt                                       \n  0.73%       postmaster  postgres                      [.] LWLockAcquire                                            0.59%       postmaster  [kernel.kallsyms]             [k] page_fault                                             \n  0.52%       postmaster  postgres                      [.] ExecQual                                                 0.40%       postmaster  postgres                      [.] ExecStoreTuple                                         \n  0.38%       postmaster  postgres                      [.] ExecScan                                                 0.37%       postmaster  postgres                      [.] check_stack_depth                                      \n  0.35%       postmaster  postgres                      [.] SearchCatCache                                           0.35%       postmaster  postgres                      [.] CheckForSerializableConflictOut                        \n  0.34%       postmaster  postgres                      [.] 
LWLockRelease                                            0.30%       postmaster  postgres                      [.] _bt_checkkeys                                          \n  0.28%       postmaster  libc-2.12.so                  [.] memcpy                                                   0.27%       postmaster  [kernel.kallsyms]             [k] get_pageblock_flags_group                              \n  0.27%       postmaster  postgres                      [.] int4eq                                                   0.27%       postmaster  postgres                      [.] heap_page_prune_opt                                    \n  0.27%       postmaster  postgres                      [.] pgstat_init_function_usage                               0.26%       postmaster  [kernel.kallsyms]             [k] _spin_lock                                             \n  0.25%       postmaster  postgres                      [.] _bt_compare                                              0.24%       postmaster  postgres                      [.] pgstat_end_function_usage\n...please let me know if we need to produce the report differently to be useful.We will begin reducing shared_buffers incrementally over the coming days.\nDave Owens541-359-2602TU Facebook | Free OnDeck Mobile Apps\n\nOn Tue, Oct 15, 2013 at 8:14 PM, Merlin Moncure <[email protected]> wrote:\nOn Tue, Oct 15, 2013 at 12:26 PM, Julien Cigar <[email protected]> wrote:\n> On Tue, Oct 15, 2013 at 08:59:08AM -0700, Tony Kay wrote:\n>> On Mon, Oct 14, 2013 at 4:42 PM, Tomas Vondra <[email protected]> wrote:\n>>\n>> > On 15.10.2013 01:00, Tony Kay wrote:\n>> > > Hi,\n>> > >\n>> > > I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a\n>> > > 16 Opteron 6276 CPU box. We limit connections to roughly 120, but\n>> > > our webapp is configured to allocate a thread-local connection, so\n>> > > those connections are rarely doing anything more than half the time.\n>> >\n>> > Lower your shared buffers to about 20% of your RAM, unless you've tested\n>> > it's actually helping in your particular case. It's unlikely you'll get\n>> > better performance by using more than that, especially on older\n>> > versions, so it's wiser to leave the rest for page cache.\n>> >\n>> > It might even be one of the causes of the performance issue you're\n>> > seeing, as shared buffers are not exactly overhead-free.\n>> >\n>> > See this for more details on tuning:\n>> >\n>> >    http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>>\n>>\n>> I had followed the general directions from several sources years ago, which\n>> indicate up to 40% of RAM. We've been running very large shared buffers for\n>\n> in general it's best to start with 10-15% of the RAM and no more then\n> 2-4 GB\n>\n>> 4 years now, but it is difficult to generate a good real load without\n>> testing against users, so we have not felt the need to move it around. In\n>> general, I don't tend to tinker with a setting that has been fine for this\n>> long without good reason. 
I've been wanting to upgrade to the newer\n>> mmap-based versions of pgsql, but was waiting to re-tune this when I did so.\n>>\n>> Why do you suspect that shared_buffers would cause the behavior I'm seeing?\n>>\n>\n> for two reasons:\n>\n> - some of the overhead of bgwriter and checkpoints is more or less linear\n> in the size of shared_buffers, for example it could be possible that a\n> large quantity of data could be dirty when a checkpoint occurs).\n>\n> - the OS cache is also being used for reads and writes, the larger\n>   shared_buffers is, the more you risk double buffering (same blocks\n>   in the OS cache and in the database buffer cache).\n\nThat's good reasoning but is not related to the problem faced by the\nOP.  The real reason why I recommend to keep shared buffers at max\n2GB, always, is because we have major contention issues which we\npresume are in the buffer area (either in the mapping or in the clock\nsweep) but could be something else entirely.  These issues tend to\nshow up on fast machines in all- or mostly- read workloads.\n\nWe are desperate for profiles demonstrating the problem in production\nworkloads.  If OP is willing to install and run perf in production\n(which is not a bad idea anyways), then my advice is to change nothing\nuntil we have a chance to grab a profile.  These types of problems are\nnotoriously difficult to reproduce in test environments.\n\nmerlin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 13 May 2014 16:04:50 -0700", "msg_from": "Dave Owens <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On Tue, May 13, 2014 at 6:04 PM, Dave Owens <[email protected]> wrote:\n\n> Hi,\n>\n> Apologies for resurrecting this old thread, but it seems like this is\n> better than starting a new conversation.\n>\n> We are now running 9.1.13 and have doubled the CPU and memory. So 2x 16\n> Opteron 6276 (32 cores total), and 64GB memory. shared_buffers set to 20G,\n> effective_cache_size set to 40GB.\n>\n> We were able to record perf data during the latest incident of high CPU\n> utilization. perf report is below:\n>\n> Samples: 31M of event 'cycles', Event count (approx.): 16289978380877\n> 44.74% postmaster [kernel.kallsyms] [k]\n> _spin_lock_irqsave\n> 15.03% postmaster postgres [.]\n> 0x00000000002ea937\n> 3.14% postmaster postgres [.] s_lock\n>\n> 2.30% postmaster [kernel.kallsyms] [k]\n> compaction_alloc\n> 2.21% postmaster postgres [.]\n> HeapTupleSatisfiesMVCC\n> 1.75% postmaster postgres [.]\n> hash_search_with_hash_value\n> 1.25% postmaster postgres [.]\n> ExecScanHashBucket\n> 1.20% postmaster postgres [.] SHMQueueNext\n>\n> 1.05% postmaster postgres [.] slot_getattr\n>\n> 1.04% init [kernel.kallsyms] [k]\n> native_safe_halt\n> 0.73% postmaster postgres [.] LWLockAcquire\n>\n> 0.59% postmaster [kernel.kallsyms] [k] page_fault\n>\n> 0.52% postmaster postgres [.] ExecQual\n>\n> 0.40% postmaster postgres [.] ExecStoreTuple\n>\n> 0.38% postmaster postgres [.] ExecScan\n>\n> 0.37% postmaster postgres [.]\n> check_stack_depth\n> 0.35% postmaster postgres [.] SearchCatCache\n>\n> 0.35% postmaster postgres [.]\n> CheckForSerializableConflictOut\n> 0.34% postmaster postgres [.] LWLockRelease\n>\n> 0.30% postmaster postgres [.] _bt_checkkeys\n>\n> 0.28% postmaster libc-2.12.so [.] 
memcpy\n>\n> 0.27% postmaster [kernel.kallsyms] [k]\n> get_pageblock_flags_group\n> 0.27% postmaster postgres [.] int4eq\n>\n> 0.27% postmaster postgres [.]\n> heap_page_prune_opt\n> 0.27% postmaster postgres [.]\n> pgstat_init_function_usage\n> 0.26% postmaster [kernel.kallsyms] [k] _spin_lock\n>\n> 0.25% postmaster postgres [.] _bt_compare\n>\n> 0.24% postmaster postgres [.]\n> pgstat_end_function_usage\n>\n> ...please let me know if we need to produce the report differently to be\n> useful.\n>\n> We will begin reducing shared_buffers incrementally over the coming days.\n>\n\n\nThis is definitely pointing at THP compaction which is increasingly\nemerging as a possible culprit for suddenly occurring (and just as suddenly\nresolving) cpu spikes. The evidence I see is:\n\n*) Lots of time in kernel\n*) \"compaction_alloc\"\n*) otherwise normal postgres profile (not lots of time in s_lock, LWLock,\nor other weird things)\n\n\nPlease check the value of THP (see here:\nhttp://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadaoop-workloads/)\nand various other workloads. If it is enabled consider disabling\nit...this will revert to pre linux 6 behavior. If you are going to attack\nthis from the point of view of lowering shared buffers, do not bother with\nincremental...head straight for 2GB or it's unlikely the problem will be\nfixed. THP compaction is not a postgres problem...mysql is affected as is\nother server platforms. If THP is indeed causing the problem, it couldn't\nhurt to get on the horn withe linux guys. Last I heard they claimed this\nkind of thing was fixed but I don't know where things stand now.\n\nmerlin\n\nOn Tue, May 13, 2014 at 6:04 PM, Dave Owens <[email protected]> wrote:\nHi,Apologies for resurrecting this old thread, but it seems like this is better than starting a new conversation.\nWe are now running 9.1.13 and have doubled the CPU and memory.  So 2x 16 Opteron 6276 (32 cores total), and 64GB memory.  shared_buffers set to 20G, effective_cache_size set to 40GB.\nWe were able to record perf data during the latest incident of high CPU utilization. perf report is below:\nSamples: 31M of event 'cycles', Event count (approx.): 16289978380877 \r\n\r\n\r\n 44.74%       postmaster  [kernel.kallsyms]             [k] _spin_lock_irqsave                                      15.03%       postmaster  postgres                      [.] 0x00000000002ea937                                     \n  3.14%       postmaster  postgres                      [.] s_lock                                                   2.30%       postmaster  [kernel.kallsyms]             [k] compaction_alloc                                       \n  2.21%       postmaster  postgres                      [.] HeapTupleSatisfiesMVCC                                   1.75%       postmaster  postgres                      [.] hash_search_with_hash_value                            \n  1.25%       postmaster  postgres                      [.] ExecScanHashBucket                                       1.20%       postmaster  postgres                      [.] SHMQueueNext                                           \n  1.05%       postmaster  postgres                      [.] slot_getattr                                             1.04%             init  [kernel.kallsyms]             [k] native_safe_halt                                       \n  0.73%       postmaster  postgres                      [.] 
LWLockAcquire                                            0.59%       postmaster  [kernel.kallsyms]             [k] page_fault                                             \n  0.52%       postmaster  postgres                      [.] ExecQual                                                 0.40%       postmaster  postgres                      [.] ExecStoreTuple                                         \n  0.38%       postmaster  postgres                      [.] ExecScan                                                 0.37%       postmaster  postgres                      [.] check_stack_depth                                      \n  0.35%       postmaster  postgres                      [.] SearchCatCache                                           0.35%       postmaster  postgres                      [.] CheckForSerializableConflictOut                        \n  0.34%       postmaster  postgres                      [.] LWLockRelease                                            0.30%       postmaster  postgres                      [.] _bt_checkkeys                                          \n  0.28%       postmaster  libc-2.12.so                  [.] memcpy                                                   0.27%       postmaster  [kernel.kallsyms]             [k] get_pageblock_flags_group                              \n  0.27%       postmaster  postgres                      [.] int4eq                                                   0.27%       postmaster  postgres                      [.] heap_page_prune_opt                                    \n  0.27%       postmaster  postgres                      [.] pgstat_init_function_usage                               0.26%       postmaster  [kernel.kallsyms]             [k] _spin_lock                                             \n  0.25%       postmaster  postgres                      [.] _bt_compare                                              0.24%       postmaster  postgres                      [.] pgstat_end_function_usage\n...please let me know if we need to produce the report differently to be useful.We will begin reducing shared_buffers incrementally over the coming days.\nThis is definitely pointing at THP compaction which is increasingly emerging as a possible culprit for suddenly occurring (and just as suddenly resolving) cpu spikes.  The evidence I see is:\n*) Lots of time in kernel*) \"compaction_alloc\"\n*) otherwise normal postgres profile (not lots of time in s_lock, LWLock, or other weird things)\nPlease check the value of THP (see here: http://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadaoop-workloads/) and various other workloads.   If it is enabled consider disabling it...this will revert to pre linux 6 behavior.  If you are going to attack this from the point of view of lowering shared buffers, do not bother with incremental...head straight for 2GB or it's unlikely the problem will be fixed.   THP compaction is not a postgres problem...mysql is affected as is other server platforms.  If THP is indeed causing the problem, it couldn't hurt to get on the horn withe linux guys.  
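On a CentOS el6 box (as mentioned earlier in the thread) the check and the disable might look like the sketch below. RHEL 6 kernels usually expose the sysfs files under a redhat_-prefixed path while newer kernels use the plain path, writing to them needs root, and making the change persistent (rc.local, a boot parameter, ...) is distribution-specific:

    # Which variant exists, and is THP currently enabled / in use?
    cat /sys/kernel/mm/redhat_transparent_hugepage/enabled 2>/dev/null
    cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null
    grep AnonHugePages /proc/meminfo

    # Disable THP and its defrag/compaction until the next reboot:
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag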
Last I heard they claimed this kind of thing was fixed but I don't know where things stand now.\nmerlin", "msg_date": "Wed, 14 May 2014 08:48:22 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" }, { "msg_contents": "On Tue, May 13, 2014 at 4:04 PM, Dave Owens <[email protected]> wrote:\n\n> Hi,\n>\n> Apologies for resurrecting this old thread, but it seems like this is\n> better than starting a new conversation.\n>\n> We are now running 9.1.13 and have doubled the CPU and memory. So 2x 16\n> Opteron 6276 (32 cores total), and 64GB memory. shared_buffers set to 20G,\n> effective_cache_size set to 40GB.\n>\n> We were able to record perf data during the latest incident of high CPU\n> utilization. perf report is below:\n>\n> Samples: 31M of event 'cycles', Event count (approx.): 16289978380877\n> 44.74% postmaster [kernel.kallsyms] [k]\n> _spin_lock_irqsave\n> 15.03% postmaster postgres [.]\n> 0x00000000002ea937\n> 3.14% postmaster postgres [.] s_lock\n>\n> 2.30% postmaster [kernel.kallsyms] [k]\n> compaction_alloc\n> 2.21% postmaster postgres [.]\n> HeapTupleSatisfiesMVCC\n>\n\n\ncompaction_alloc points to \"transparent huge pages\" kernel problem,\nwhile HeapTupleSatisfiesMVCC\npoints to the problem with each backend taking a ProcArrayLock for every\nnot-yet-committed tuple it encounters. I don't know which of those leads\nto the _spin_lock_irqsave. It seems more likely to be transparent huge\npages that does that, but perhaps both of them do.\n\nIf it is the former, you can find other message on this list about\ndisabling it. If it is the latter, your best bet is to commit your bulk\ninserts as soon as possible (this might be improved for 9.5, if we can\nfigure out how to test the alternatives). Please let us know what works.\n\nIf lowering shared_buffers works, I wonder if disabling the transparent\nhuge page compaction issue might let you bring shared_buffers back up\nagain.\n\n\nCheers,\n\nJeff\n\nOn Tue, May 13, 2014 at 4:04 PM, Dave Owens <[email protected]> wrote:\nHi,Apologies for resurrecting this old thread, but it seems like this is better than starting a new conversation.\nWe are now running 9.1.13 and have doubled the CPU and memory.  So 2x 16 Opteron 6276 (32 cores total), and 64GB memory.  shared_buffers set to 20G, effective_cache_size set to 40GB.\nWe were able to record perf data during the latest incident of high CPU utilization. perf report is below:\nSamples: 31M of event 'cycles', Event count (approx.): 16289978380877 \n\n\n\n\n 44.74%       postmaster  [kernel.kallsyms]             [k] _spin_lock_irqsave                                      15.03%       postmaster  postgres                      [.] 0x00000000002ea937                                     \n  3.14%       postmaster  postgres                      [.] s_lock                                                   2.30%       postmaster  [kernel.kallsyms]             [k] compaction_alloc                                       \n  2.21%       postmaster  postgres                      [.] HeapTupleSatisfiesMVCC                                 compaction_alloc points to \"transparent huge pages\" kernel problem, while HeapTupleSatisfiesMVCC points to the problem with each backend taking a ProcArrayLock for every not-yet-committed tuple it encounters.  I don't know which of those leads to the _spin_lock_irqsave.  
It seems more likely to be transparent huge pages that does that, but perhaps both of them do.\nIf it is the former, you can find other message on this list about disabling it.  If it is the latter, your best bet is to commit your bulk inserts as soon as possible (this might be improved for 9.5, if we can figure out how to test the alternatives). Please let us know what works.  \nIf lowering shared_buffers works, I wonder if disabling the transparent huge page compaction issue might let you bring shared_buffers back up again.  \nCheers,Jeff", "msg_date": "Wed, 14 May 2014 09:28:30 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU spikes and transactions" } ]
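A short illustration of the "commit your bulk inserts as soon as possible" advice above, in case it helps anyone seeing the same profile: the idea is simply to break one long-running load into several committed slices so concurrent scans meet far fewer not-yet-committed tuples. The table names and batch ranges below are purely hypothetical, so treat this as a sketch of the pattern rather than a recipe:

    -- Instead of one giant transaction that stays open for the whole load,
    -- commit in slices; each COMMIT bounds how many in-flight tuples other
    -- backends have to recheck via HeapTupleSatisfiesMVCC.
    BEGIN;
    INSERT INTO target_table SELECT * FROM staging_table WHERE id BETWEEN 1 AND 100000;
    COMMIT;

    BEGIN;
    INSERT INTO target_table SELECT * FROM staging_table WHERE id BETWEEN 100001 AND 200000;
    COMMIT;
    -- ...and so on until the staging data is drained.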
[ { "msg_contents": "Hi, This is my first post ever in the Postgres forum. I am relatively new to\nPostgres and coming from oracle background. We have hot stand by setup to\nserve mainly the read only queries. Past few days we have been facing a\nperformance issues on one of the transaction search. The search mainly\nutilizes 3 of our biggest transaction tables. We had recently crash on both\nprimary and standby because of the space issues. Both servers were brought\nup and running successfully after that incident. The standby is in almost in\nsync with primary, far behind by less than a second. I also rebuilt all the\nmajor indexes on the primary. I have done some research work to address the\nissue as following. (1) I checked most of the database parameters settings\nand they are same on both primary and standby, except some specific to the\nindividual server. (2) Checked the explain plan for the offending query and\nthey are exactly same on both the servers. Checked cpu usage on unix box and\nfound it was quite low. (3) The load on standby does not seem to be issue,\nbecause with absolutely no load the query takes long and most of the time\nreturned with the conflict error. (4) The hardware settings are exactly same\non both primary and secondary. (5) The same query executes very fast on\nprimary (6) After we recovered standby it was fine for few weeks and then\nagain started slowing down. I believe autovacuum and analyze does not need\nto be run on standby as it inherits that from primary. Please correct me if\nI am wrong. Any help or suggestion would be greatly appreciated. Thanks, \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nHi, \n\nThis is my first post ever in the Postgres forum. I am relatively new to Postgres and coming from oracle background. \n\nWe have hot stand by setup to serve mainly the read only queries. \n\nPast few days we have been facing a performance issues on one of the transaction search. The search mainly utilizes 3 of our biggest transaction tables. We had recently crash on both primary and standby because of the space issues. \n\n\nBoth servers were brought up and running successfully after that incident. The standby is in almost in sync with primary, far behind by less than a second. I also rebuilt all the major indexes on the primary. \n\nI have done some research work to address the issue as following. \n\n(1) I checked most of the database parameters settings and they are same on both primary and standby, except some specific to the individual server. \n(2) Checked the explain plan for the offending query and they are exactly same on both the servers. Checked cpu usage on unix box and found it was quite low. \n(3) The load on standby does not seem to be issue, because with absolutely no load the query takes long and most of the time returned with the conflict error. \n(4) The hardware settings are exactly same on both primary and secondary. \n(5) The same query executes very fast on primary \n(6) After we recovered standby it was fine for few weeks and then again started slowing down. \n\nI believe autovacuum and analyze does not need to be run on standby as it inherits that from primary. Please correct me if I am wrong. \n\nAny help or suggestion would be greatly appreciated. 
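For what it's worth, the "behind by less than a second" figure is easy to re-check at any time on the standby itself. A minimal sketch using the stock recovery functions available in 9.1 (nothing here is installation specific, but note the lag estimate is only meaningful while new WAL keeps arriving):

    -- Run on the standby: confirm it is in recovery and estimate replay lag.
    SELECT pg_is_in_recovery() AS in_recovery,
           now() - pg_last_xact_replay_timestamp() AS approx_replay_lag;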
\n\nThanks, \n\n\t\n\t\n\t\n\nView this message in context: Hot Standby performance issue\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.", "msg_date": "Tue, 15 Oct 2013 13:40:26 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Hot Standby performance issue" }, { "msg_contents": "Anybody has any idea, or pointer ? This is a high priority issue I have\nresolve at work. Any help would be of great help.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775103.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 18 Oct 2013 14:49:24 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "http://www.postgresql.org/docs/current/static/hot-standby.html#HOT-STANDBY-CAVEATS\n\n\n\nOn Fri, Oct 18, 2013 at 11:49 PM, sparikh <[email protected]> wrote:\n\n> Anybody has any idea, or pointer ? This is a high priority issue I have\n> resolve at work. Any help would be of great help.\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775103.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nhttp://www.postgresql.org/docs/current/static/hot-standby.html#HOT-STANDBY-CAVEATS\nOn Fri, Oct 18, 2013 at 11:49 PM, sparikh <[email protected]> wrote:\nAnybody has any idea, or pointer ? This is a high priority issue I have\nresolve at work. Any help would be of great help.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775103.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 19 Oct 2013 01:15:58 +0200", "msg_from": "Sethu Prasad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 15.10.2013 22:40, sparikh wrote:\n> Hi, This is my first post ever in the Postgres forum. I am\n> relatively new to Postgres and coming from oracle background.\n> \n> We have hot stand by setup to serve mainly the read only queries. \n> Past few days we have been facing a performance issues on one of the\n> transaction search. The search mainly utilizes 3 of our biggest \n> transaction tables.\n\nWhat do you mean by \"transaction search\"?\n\n> \n> We had recently crash on both primary and standby because of the\n> space issues. Both servers were brought up and running successfully\n> after that incident. The standby is in almost in sync with primary,\n> far behind by less than a second. I also rebuilt all the major\n> indexes on the primary. 
I have done some research work to address the\n> issue as following.\n> \n> (1) I checked most of the database parameters settings and they are \n> same on both primary and standby, except some specific to the \n> individual server.\n\nSo, what are the basic paremeters?\n\nAnd what PostgreSQL version / OS / hw are we dealing with?\n\n> (2) Checked the explain plan for the offending query and they are \n> exactly same on both the servers. Checked cpu usage on unix box and \n> found it was quite low.\n\nWell, that's hardly useful as you haven't provided the query not the\nexplain plan. Have you tried EXPLAIN ANALYZE?\n\n> (3) The load on standby does not seem to be issue, because with \n> absolutely no load the query takes long and most of the time\n> returned with the conflict error.\n\nNot suse I understand this. Are you saying that the standby is mostly\nidle, i.e. the query seems to be stuck, and then fails with conflict\nerror most of the time?\n\n> (4) The hardware settings are exactly same on both primary and \n> secondary.\n\nSo what are these hardware settings? BTW do you have some stats from the\nOS (CPU / IO / memory) collected at the time of the performance issue?\n\n> (5) The same query executes very fast on primary\n\nQuery? Explain analyze?\n\n> (6) After we recovered standby it was fine for few weeks and then \n> again started slowing down.\n\nWas it slowing down gradually, or did it start failing suddenly?\n\n> I believe autovacuum and analyze does not need to be run on standby \n> as it inherits that from primary. Please correct me if I am wrong.\n\nAny commands that would modify the database (including vacuum and\nautovacuum) are disabled on the standby.\n\n> Any help or suggestion would be greatly appreciated. Thanks,\n\nPlease, post as much details as possible. This reports contains pretty\nmuch no such details - query, explain or explain analyze, info about the\nsettings / hardware etc.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 19 Oct 2013 01:19:41 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 18.10.2013 23:49, sparikh wrote:\n> Anybody has any idea, or pointer ? This is a high priority issue I\n> have resolve at work. Any help would be of great help.\n\nTo help you we really need much more specific report. The one you posted\ncontains no details whatsoever - it doesn't even mention what version of\nPostgreSQL you use, on what OS or the query.\n\nPlease, provide substantially more details. We can't really help you\nwithout it.\n\nregards\nTomas\n\nPS: Please, format your messages reasonably, i.e. 
not one huge paragraph\nof text.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 19 Oct 2013 01:23:22 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Hi Tomas,\n\nThanks so much for your response and sorry for not providing the enough\ndetails.\n\nI have attached the zip file which has query,explain plan and database\nparameter settings for both primary and secondary.\n\nPlease note that query has multiple unions only the first query on top is\ncausing the performance issue.\n\nTransaction search is one of the feature in our Admin user interface(web\nportal) where user can search for the transactions against our OLTP\ndatabase. The attached query is generated dynamically by the application.\n\n> (3) The load on standby does not seem to be issue, because with \n> absolutely no load the query takes long and most of the time \n> returned with the conflict error. \n\nNot suse I understand this. Are you saying that the standby is mostly \nidle, i.e. the query seems to be stuck, and then fails with conflict \nerror most of the time? \n\nThe standby is not idle all the time. What I meant was even with no user\nactivity or no active user sessions, if I issue the query directly from\npgadmin tool it takes for ever. \n\nHardware settings both primary and secondary :\n===================================\n\nRed Hat Enterprise Linux Server release 5.5 (Tikanga)\nLinux 2.6.18-194.26.1.el5 x86_64\n4 CPUs\n16 GB RAM\nIntel Xeon\n\nPostgresql Version:\n================= \n \"PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2\n20080704 (Red Hat 4.1.2-51), 64-bit\"\n\n6) After we recovered standby it was fine for few weeks and then \n> again started slowing down. \n\nWas it slowing down gradually, or did it start failing suddenly? \n\nHonestly speaking I do not exactly, when users started reporting the issue I\nstarted looking into it. But the performance was good in September and\nsomewhere in October it started slowing down. I guess it was gradual. There\nwere no code change in the application or major change in the data volume. \n\nHope this helps. Please let me know if you need any other details.\n\nThanks Again.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775123.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 18 Oct 2013 17:53:36 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Hi,\n\nOn 19.10.2013 02:53, sparikh wrote:\n> Hi Tomas,\n> \n> Thanks so much for your response and sorry for not providing the enough\n> details.\n> \n> I have attached the zip file which has query,explain plan and database\n> parameter settings for both primary and secondary.\n\nI see no attachment, so it got lost somehow. Can you post the query on\nexplain.depesz.com and the query inline? 
Assuming it's not a horror-like\nstory produced by some ORM, in that case an attachment is probably\nappropriate, not to cause hear-attack to casual readers.\n\n> Please note that query has multiple unions only the first query on top is\n> causing the performance issue.\n\nSo, if you run only the problematic part, is it faster / slower. If it's\nequally slow then we can deal with a simpler query, right?\n\n> Transaction search is one of the feature in our Admin user interface(web\n> portal) where user can search for the transactions against our OLTP\n> database. The attached query is generated dynamically by the application.\n\nI.e. it's a form in the UI, and the application generates query matching\nsome fields in a form.\n\n>> (3) The load on standby does not seem to be issue, because with \n>> absolutely no load the query takes long and most of the time \n>> returned with the conflict error. \n> \n> Not suse I understand this. Are you saying that the standby is mostly \n> idle, i.e. the query seems to be stuck, and then fails with conflict \n> error most of the time? \n> \n> The standby is not idle all the time. What I meant was even with no user\n> activity or no active user sessions, if I issue the query directly from\n> pgadmin tool it takes for ever. \n\nIIRC I was unable to parse your description of what's happening on the\nstandby (might be my fault, as I'm not a native speaker).\n\nI'm still not sure whether the query is stuck or only processing the\ndata very slowly. Anyway, instead of describing what's happening, could\nyou collect some data using vmstat/iostat and post it here? Something like\n\n vmstat 1\n\nand\n\n iostat -x -k 1\n\ncollected while executing the query. Give us ~15-30 seconds of data for\neach, depending of how much it fluctuates. Another option is to use\n'perf' to monitor the backend executing your query. Something like\n\n perf record -g -a -p PID\n\nfor a reasonable time, and then 'perf report' (there's like a zillion of\noptions available for perf, but this should give you some idea where\nmost of the time is spent).\n\nIf you have strong stomach, you might even use strace ...\n\n> Hardware settings both primary and secondary :\n> ===================================\n> \n> Red Hat Enterprise Linux Server release 5.5 (Tikanga)\n> Linux 2.6.18-194.26.1.el5 x86_64\n> 4 CPUs\n> 16 GB RAM\n> Intel Xeon\n\nOK. But that's only the HW+OS. What about the basic database parameters\nthat you have mentioned checking? Say, shared buffers, work mem and such\nthings?\n\n> Postgresql Version:\n> ================= \n> \"PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2\n> 20080704 (Red Hat 4.1.2-51), 64-bit\"\n\nAny particular reason why you're running a minor version that's 2 years\nold and there were numerous bugs fixed since then? Including a serious\nsecurity issue in 9.1.9.\n\nPlease, plan an update to 9.1.10 ASAP. It might easily be the case that\nyou're hitting a bug that was already fixed a long time ago.\n\n> 6) After we recovered standby it was fine for few weeks and then \n>> again started slowing down. \n> \n> Was it slowing down gradually, or did it start failing suddenly? \n> \n> Honestly speaking I do not exactly, when users started reporting the issue I\n> started looking into it. But the performance was good in September and\n> somewhere in October it started slowing down. I guess it was gradual. There\n> were no code change in the application or major change in the data volume. \n\nIMHO that's pretty important fact. 
If your monitoring or (at least) logs\ncan't give you an answer, then you should improve that. At least start\nlogging slow queries and deploy at least some very basic monitoring\n(e.g. Munin is quite simple and the PostgreSQL plugin, among other\nthings, collects data about replication conflicts, which might be the\nculprit here).\n\n> Hope this helps. Please let me know if you need any other details.\n\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 20 Oct 2013 02:20:52 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "sparikh <[email protected]> wrote:\n\n> PostgreSQL 9.1.1\n\nYou really should apply the fixes for bugs and security\nvulnerabilities which are available.  Some of those may address a\nrelevant performance problem.\n\nhttp://www.postgresql.org/support/versioning/\n\n> the performance was good in September and somewhere in October it\n> started slowing down. I guess it was gradual. There were no code\n> change in the application or major change in the data volume.\n\nWe really need to see your configuration, along with a description\nof the machine.  One common cause for such a slowdown is not having\nautovacuum configured to be aggressive enough.  Another possible\ncause is a transaction which has been left open for too long.  Look\nat pg_stat_activity and pg_prepared_xacts for xact_start or\nprepared more than an hour old.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 20 Oct 2013 09:03:19 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Thanks so much Tomas and Kevin for your valuable inputs. I am getting very\ngood response from this forum and learning so many new stuffs. I will try\nall those options and will let you update .\n\n\nstandby_performance_issue.rar\n<http://postgresql.1045698.n5.nabble.com/file/n5775181/standby_performance_issue.rar> \n\n\nOn further digging I found from the new relic report that as soon as I\nexecute query IO spikes immediately (100%). But the same query on primary\nexecutes very fast.\n\nI am not sure if postgres has some utility like what oracle's tkprof or AWR\nwhere I can exactly pin point where exactly the query spends time. I will\ntry Tomas' suggestions perf and strace.\n\nBelow is the query. 
I also tried to attached rar file one more time,\nhopefully it gets through this time.\n\nSELECT xfer_id, transaction_type, evse_time, transmit_time, error_code,\ndetail, reported_card_account_number as reported_card_account_number,\nevent_id,\n event_code,evse_id, batch_id, port, charge_event_id as charge_event_id FROM\n(SELECT t.transaction_id::text AS xfer_id, t.transaction_type, e.event_time\nAS evse_time,\n t.create_date AS transmit_time, t.error_code::text, '' AS detail,\nCOALESCE(e.reported_rfid,'N/A') AS reported_card_account_number,\ne.event_id::text, e.event_code::text,\n t.evse_id::text, t.batch_id, e.port, COALESCE(e.evse_charge_id,'N/A') AS\ncharge_event_id \nFROM evse_transaction t, evse_event e , evse_unit u \nWHERE e.transaction_id = t.transaction_id AND t.evse_id = u.evse_id \nAND e.event_code IN\n('1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18','19','20','21','22','23','24','25','26','27','28','29','30','31','32','33','34','35','36','37','38','39','40','41','42','43','44','45','46','47','48','49')) \nAND u.evse_id = 1\n AND t.create_date BETWEEN '2013-10-01'::date AND '2013-10-15'::date +\nINTERVAL '1 day'\n UNION\n SELECT t.transaction_id::text AS xfer_id, t.transaction_type, t.log_time AS\nevse_time, t.create_date AS transmit_time, t.error_code::text, '' AS detail,\n COALESCE(t.reported_card_account_number,'N/A') AS\nreported_card_account_number, '' AS event_id, '' AS event_code,\nt.evse_id::text,t.batch_id, '' AS port, 'N/A' AS charge_event_id \nFROM evse_transaction t, evse_unit u \nWHERE t.evse_id = u.evse_id AND t.api_error IS NULL \nAND t.transaction_type NOT IN\n('DCFCTransactionService','L2TransactionService','EVSEUploadTransactionService','EVSEUploadTransactionService','UploadTransactionService') \nAND t.transaction_type IN\n('DCFCDownloadConfigService','L2DownloadConfigService','EVSEDownloadConfigService','DownloadConfigService','ConfigDownloadService','DCFCUploadConfigService','L2UploadConfigService','EVSEUploadConfigService','UploadConfigService','ConfigUploadService','L2GetAdPackageListService','AdPackageListService','L2GPSService','EVSEGPSService','GPSService','ReportErrorService','EVSEDownloadRevisionService','DCFCCommandService','L2CommandService','CommandService','DCFCErrorService','L2ErrorService','EVSEReportErrorService','ErrorService','DCFCHeartbeatService','L2HeartbeatService','HeartbeatService','DCFCAuthorizeService','L2AuthorizeService','AuthorizeService','DCFCGetAccessListService','L2GetAccessListService','GetAccessListService','DCFCSetAccessService','L2SetAccessService','SetAccessService','DCFCPackageDownloadService','L2PackageDownloadService','PackageDownloadService','DCFCReportInventoryService','L2ReportInventoryService','ReportInventoryService','DCFCTargetVersionService','L2TargetVersionService','TargetVersionService','DCFCPackageListService','L2PackageInfoService','PackageListService','DCFCPackageInfoService','L2PackageInfoService','PackageInfoService','DCFCRegisterService','L2AuthorizeCodeService',\n'AuthorizeCodeService') \n AND u.evse_id = 1 AND t.create_date BETWEEN '2013-10-01'::date AND\n'2013-10-15'::date + INTERVAL '1 day' \nUNION\n SELECT ef.fee_id::text AS xfer_id, 'FEE' as transaction_type, ef.event_time\nAS evse_time, ef.create_time AS transmit_time, \n'' AS error_code, 'Fee Event' AS detail, COALESCE(ef.card_account_number,\n'N/A') AS reported_card_account_number, '' AS event_id, '' AS event_code,\nef.evse_id::text, '' AS batch_id, \nef.port::text AS port, COALESCE(ef.client_charge_id, 'N/A') 
AS\ncharge_event_id \nFROM evse_fee ef LEFT OUTER JOIN evse_unit eu ON eu.evse_id = ef.evse_id \nWHERE ef.evse_id = 1 AND ef.create_time BETWEEN '2013-10-01'::date AND\n'2013-10-15'::date + INTERVAL '1 day'\n) x \nORDER BY transmit_time DESC LIMIT 500\n\n==========================================\n\nQuery plan:\n\nLimit (cost=101950.33..101950.40 rows=30 width=368) (actual\ntime=18.421..18.421 rows=0 loops=1)\n Output: ((t.transaction_id)::text), t.transaction_type, e.event_time,\nt.create_date, t.error_code, (''::text), (COALESCE(e.reported_rfid,\n'N/A'::text)), ((e.event_id)::text), ((e.event_code)::text),\n((t.evse_id)::text), t.batch_id, e.port, (COALESCE(e.evse_charge_id,\n'N/A'::text))\n Buffers: shared hit=5 read=7\n -> Sort (cost=101950.33..101950.40 rows=30 width=368) (actual\ntime=18.421..18.421 rows=0 loops=1)\n Output: ((t.transaction_id)::text), t.transaction_type,\ne.event_time, t.create_date, t.error_code, (''::text),\n(COALESCE(e.reported_rfid, 'N/A'::text)), ((e.event_id)::text),\n((e.event_code)::text), ((t.evse_id)::text), t.batch_id, e.port,\n(COALESCE(e.evse_charge_id, 'N/A'::text))\n Sort Key: t.create_date\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=5 read=7\n -> HashAggregate (cost=101948.99..101949.29 rows=30 width=95)\n(actual time=4.414..4.414 rows=0 loops=1)\n Output: ((t.transaction_id)::text), t.transaction_type,\ne.event_time, t.create_date, t.error_code, (''::text),\n(COALESCE(e.reported_rfid, 'N/A'::text)), ((e.event_id)::text),\n((e.event_code)::text), ((t.evse_id)::text), t.batch_id, e.port,\n(COALESCE(e.evse_charge_id, 'N/A'::text))\n Buffers: shared hit=5 read=5\n -> Append (cost=0.00..101948.01 rows=30 width=95) (actual\ntime=4.412..4.412 rows=0 loops=1)\n Buffers: shared hit=5 read=5\n -> Nested Loop (cost=0.00..101163.24 rows=10\nwidth=112) (actual time=4.397..4.397 rows=0 loops=1)\n Output: (t.transaction_id)::text,\nt.transaction_type, e.event_time, t.create_date, t.error_code, ''::text,\nCOALESCE(e.reported_rfid, 'N/A'::text), (e.event_id)::text,\n(e.event_code)::text, (t.evse_id)::text, t.batch_id, e.port,\nCOALESCE(e.evse_charge_id, 'N/A'::text)\n Buffers: shared read=4\n -> Index Scan using evse_unit_pkey on\npublic.evse_unit u (cost=0.00..8.72 rows=1 width=4) (actual\ntime=4.395..4.395 rows=0 loops=1)\n Output: u.evse_id\n Index Cond: (u.evse_id = 123)\n Buffers: shared read=4\n -> Nested Loop (cost=0.00..101154.22 rows=10\nwidth=112) (never executed)\n Output: t.transaction_id,\nt.transaction_type, t.create_date, t.error_code, t.evse_id, t.batch_id,\ne.event_time, e.reported_rfid, e.event_id, e.event_code, e.port,\ne.evse_charge_id\n -> Index Scan using\nevse_transaction_evse_id_create_date_idx on public.evse_transaction t \n(cost=0.00..380.04 rows=89 width=65) (never executed)\n Output: t.transaction_id,\nt.transaction_type, t.create_date, t.error_code, t.evse_id, t.batch_id\n Index Cond: ((t.evse_id = 123) AND\n(t.create_date >= '2013-10-07'::date) AND (t.create_date <= '2013-10-10\n00:00:00'::timestamp without time zone))\n -> Index Scan using\nevse_event_transaction_idx on public.evse_event e (cost=0.00..1131.07\nrows=98 width=51) (never executed)\n Output: e.event_id, e.transaction_id,\ne.event_code, e.event_name, e.event_row, e.event_time, e.status,\ne.status_detail, e.plug_event_id, e.charge_event_id, e.power_id, e.flow_id,\ne.port, e.event_source, e.evse_id, e.reported_rfid, e.evse_charge_id\n Index Cond: (e.transaction_id =\nt.transaction_id)\n Filter: (e.event_code = 
ANY\n('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49}'::integer[]))\n -> Nested Loop (cost=0.00..395.68 rows=18 width=88)\n(actual time=0.011..0.011 rows=0 loops=1)\n Output: (t.transaction_id)::text,\nt.transaction_type, t.log_time, t.create_date, t.error_code, ''::text,\nCOALESCE(t.reported_card_account_number, 'N/A'::text), ''::text, ''::text,\n(t.evse_id)::text, t.batch_id, ''::text, 'N/A'::text\n Buffers: shared hit=2 read=1\n -> Index Scan using evse_unit_pkey on\npublic.evse_unit u (cost=0.00..8.72 rows=1 width=4) (actual\ntime=0.010..0.010 rows=0 loops=1)\n Output: u.evse_id\n Index Cond: (u.evse_id = 1234)\n Buffers: shared hit=2 read=1\n -> Index Scan using\nevse_transaction_evse_id_create_date_idx on public.evse_transaction t \n(cost=0.00..386.60 rows=18 width=88) (never executed)\n Output: t.transaction_id,\nt.transaction_type, t.log_time, t.create_date, t.error_code,\nt.reported_card_account_number, t.evse_id, t.batch_id\n Index Cond: ((t.evse_id = 1234) AND\n(t.create_date >= '2013-10-07'::date) AND (t.create_date <= '2013-10-10\n00:00:00'::timestamp without time zone))\n Filter: ((t.api_error IS NULL) AND\n(t.transaction_type <> ALL\n('{DCFCTransactionService,L2TransactionService,EVSEUploadTransactionService,EVSEUploadTransactionService,UploadTransactionService}'::text[]))\nAND (t.transaction_type = ANY\n('{DCFCDownloadConfigService,L2DownloadConfigService,EVSEDownloadConfigService,DownloadConfigService,ConfigDownloadService,DCFCUploadConfigService,L2UploadConfigService,EVSEUploadConfigService,UploadConfigService,ConfigUploadService,L2GetAdPackageListService,AdPackageListService,L2GPSService,EVSEGPSService,GPSService,ReportErrorService,EVSEDownloadRevisionService,DCFCCommandService,L2CommandService,CommandService,DCFCErrorService,L2ErrorService,EVSEReportErrorService,ErrorService,DCFCHeartbeatService,L2HeartbeatService,HeartbeatService,DCFCAuthorizeService,L2AuthorizeService,AuthorizeService,DCFCGetAccessListService,L2GetAccessListService,GetAccessListService,DCFCSetAccessService,L2SetAccessService,SetAccessService,DCFCPackageDownloadService,L2PackageDownloadService,PackageDownloadService,DCFCReportInventoryService,L2ReportInventoryService,ReportInventoryService,DCFCTargetVersionService,L2TargetVersionService,TargetVersionService,DCFCPackageListService,L2PackageInfoService,PackageListService,DCFCPackageInfoService,L2PackageInfoService,PackageInfoService,DCFCRegisterService,L2AuthorizeCodeService,AuthorizeCodeService}'::text[])))\n -> Nested Loop (cost=0.00..388.80 rows=2 width=80)\n(actual time=0.002..0.002 rows=0 loops=1)\n Output: (t.transaction_id)::text, 'ERROR'::text,\nt.create_date, t.create_date, t.error_code, t.transaction_type,\nCOALESCE(t.reported_card_account_number, 'N/A'::text), ''::text, ''::text,\n(t.evse_id)::text, t.batch_id, ''::text, 'N/A'::text\n Buffers: shared hit=3\n -> Index Scan using evse_unit_pkey on\npublic.evse_unit u (cost=0.00..8.72 rows=1 width=4) (actual\ntime=0.002..0.002 rows=0 loops=1)\n Output: u.evse_id\n Index Cond: (u.evse_id = 1234)\n Buffers: shared hit=3\n -> Index Scan using\nevse_transaction_evse_id_create_date_idx on public.evse_transaction t \n(cost=0.00..380.04 rows=2 width=80) (never executed)\n Output: t.transaction_id, t.create_date,\nt.error_code, t.transaction_type, t.reported_card_account_number, t.evse_id,\nt.batch_id\n Index Cond: ((t.evse_id = 1234) AND\n(t.create_date >= '2013-10-07'::date) AND (t.create_date <= 
'2013-10-10\n00:00:00'::timestamp without time zone))\n Filter: (t.api_error IS NOT NULL)\nTotal runtime: 18.611 ms\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775181.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 20 Oct 2013 10:58:21 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Hi,\n\nOn 20.10.2013 19:58, sparikh wrote:\n> Thanks so much Tomas and Kevin for your valuable inputs. I am getting\n> very good response from this forum and learning so many new stuffs. I\n> will try all those options and will let you update .\n> \n> \n> standby_performance_issue.rar \n> <http://postgresql.1045698.n5.nabble.com/file/n5775181/standby_performance_issue.rar>\n\nYup, this time it worked. Anyway, next time please consider posting the\nexplain plan through explain.depesz.com, it's way more readable than the\nplans wrapped when posted inline.\n\nFor example this is your plan: http://explain.depesz.com/s/SBVg\n\nHowever this shows only 18 ms runtime. Is this one of the slow runs? I'm\nassuming it's not, because 18ms seems quite fast tome. In that case it's\npretty much useless, because we need to see a plan for one of the slow\nqueries.\n\nStupid question - when you say that a query is fast on primary but slow\non standby, are you referring to exactly the same query, including\nparameter values?\n\nOr is the query running much longer than the reported 18 ms?\n\n> On further digging I found from the new relic report that as soon as\n> I execute query IO spikes immediately (100%). But the same query on\n> primary executes very fast.\n\nWell, that might just as well mean that the primary has the data in\nfilesystem cache, and standby needs to read that from file. If you\nrepeat the query multiple times, do you still see I/O spike?\n\nHow do you use the standby? Is it just sitting there most of the time,\nor is it queried about as much as the primary?\n\nBTW when checking the configuration info you've sent, I've noticed this:\n\n \"hot_standby_feedback\",\"off\"\n\nIIRC you've reported the query frequently crashes on the standby because\nof replication conflicts. Why don't you set this to on?\n\n> I am not sure if postgres has some utility like what oracle's tkprof\n> or AWR where I can exactly pin point where exactly the query spends\n> time. I will try Tomas' suggestions perf and strace.\n\nNo, at least the community version. But the explain analyze is usually a\ngood source guide.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 21 Oct 2013 01:01:11 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Stupid question - when you say that a query is fast on primary but slow \non standby, are you referring to exactly the same query, including \nparameter values? \n\nYes . It is exactly and exactly the same query with the same parameters. \nYes, it sounds stupid but that is what happening. 
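Following up on the hot_standby_feedback observation above: all of the settings that influence whether a long-running standby query gets cancelled with "conflict with recovery" can be pulled in one go. A sketch against pg_settings, using the parameter names as shipped in 9.1, worth running on both primary and standby:

    SELECT name, setting, source
      FROM pg_settings
     WHERE name IN ('hot_standby_feedback',
                    'max_standby_streaming_delay',
                    'max_standby_archive_delay',
                    'vacuum_defer_cleanup_age');

hot_standby_feedback and the two max_standby_*_delay values are what decide whether the standby waits for such a query or cancels it.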
Though plan says it is\n18ms it runs for more than 15-20 mins and finally returns with conflict\nerror : \" ERROR: canceling statement due to conflict with recovery \"\n\nEven the to run execute plan itself takes very long on standby. Just to get\nthe execute plan on standby is turning out big deal. \n\nRegarding IO spike, yes I can understand that if data is not available in\nthe memory then it has to get it from disk. But the thing is it remains\nthere as much time until query returns with Query conflict error.\n\nThanks again.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775257.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 21 Oct 2013 08:05:33 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 21.10.2013 17:05, sparikh wrote:\n> Stupid question - when you say that a query is fast on primary but\n> slow on standby, are you referring to exactly the same query,\n> including parameter values?\n> \n> Yes . It is exactly and exactly the same query with the same\n> parameters. Yes, it sounds stupid but that is what happening. Though\n> plan says it is 18ms it runs for more than 15-20 mins and finally\n> returns with conflict error : \" ERROR: canceling statement due to\n> conflict with recovery \"\n\nOK.\n\n> Even the to run execute plan itself takes very long on standby. Just\n> to get the execute plan on standby is turning out big deal.\n\nDo you mean EXPLAIN or EXPLAIN ANALYZE?\n\nSo far we've seen just EXPLAIN ANALYZE - can you try just EXPLAIN? If it\nlocks, it's either because of something expensive in the planning, or\nlocking.\n\nThe locking is much more likely, because the primary is behaving just\nfine and the resulting plan is exactly the same on both ends.\n\n> Regarding IO spike, yes I can understand that if data is not\n> available in the memory then it has to get it from disk. But the\n> thing is it remains there as much time until query returns with Query\n> conflict error.\n\nI don't think the I/O is a problem at all, because the query takes just\n18 milliseconds. However that does not include planning, so either a lot\nof time spent waiting for a lock or doing a lot of stuff on CPU, won't\nbe reported here.\n\nWhat you can do to debug this is either look at pg_locks on the standby\nfor connections with \"granted=f\", or connect using psql and do this\n\n set log_lock_waits = true;\n set client_min_messages = log;\n\n EXPLAIN ... query ...;\n\nand it should print what locks the connection is waiting for. Then you\nmay investigate further, e.g. check who's holding the lock in\npg_stat_activity etc.\n\nBut again, I think spending a single minute on this before upgrading to\nthe current version is a waste of time.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 21 Oct 2013 21:58:56 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Yes, both Explain and Explain Analyse are taking time. 
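To expand on the pg_locks idea above, here is a rough sketch of the kind of join that shows which backend is blocking which, using the 9.1 column names (procpid/current_query) and matching relation-level locks only, so it is a starting point rather than a complete lock-conflict query:

    SELECT w.pid           AS waiting_pid,
           h.pid           AS blocking_pid,
           a.current_query AS blocking_query
      FROM pg_locks w
      JOIN pg_locks h
        ON h.granted
       AND h.locktype = w.locktype
       AND h.database IS NOT DISTINCT FROM w.database
       AND h.relation IS NOT DISTINCT FROM w.relation
       AND h.pid <> w.pid
      JOIN pg_stat_activity a ON a.procpid = h.pid
     WHERE NOT w.granted;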
As you suggested I set\nthe lock parameters, but no locks are observed. Also checked\npg_stat_activity and none of the sessions are either waiting are blocked.\n\nI agree we must upgrade to latest version (9.1.10), but unfortunately kind\nof resources (not only man power) we are having it is going to be extremely\nchallenging task for us. Of course all other options are not working then we\nhave to take the tough route. No choice.\n\nI am also working with sys admin to rule any issues at the OS or VM level.\n\nThanks. \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775332.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 21 Oct 2013 14:18:52 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 21.10.2013 23:18, sparikh wrote:\n> Yes, both Explain and Explain Analyse are taking time. As you \n> suggested I set the lock parameters, but no locks are observed. Also \n> checked pg_stat_activity and none of the sessions are either waiting \n> are blocked.\n\nNot even the one running the explain? That's weird. Is the backend just\nsitting there idle, or is it consuming some CPU?\n\nIf it's burning CPU, try to run perf to see where it's spending time.\nWe've already recommended this at least twice, but I haven't seen any\nreport yet. This might show if there are any issues with spinlocks\n(which don't show in pg_locks etc.).\n\nWhat about long-open transactions, not necessarily blocked/waiting?\n\n SELECT xact_start, current_query, waiting\n FROM pg_stat_activity WHERE xact_start ASC LIMIT 10;\n\n> I agree we must upgrade to latest version (9.1.10), but\n> unfortunately kind of resources (not only man power) we are having it\n> is going to be extremely challenging task for us. Of course all other\n> options are not working then we have to take the tough route. No\n> choice.\n\nIt's not really about this particular issue. As both me and Kevin\npointed out, there are some pretty important ecurity fixes etc. You need\nto update irrespectedly of this issue.\n\nBTW minor upgrades (including 9.1.1 -> 9.1.10) are supposed to be\nrelatively simple, as the the format remains the same. So it's a matter\nof shutting down the machine, updating the binaries and starting again.\nOf course, I'm not familiar with your setup and requirements.\n\n> I am also working with sys admin to rule any issues at the OS or VM \n> level.\n\nOK, good. Post a summary of what you checked.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 21 Oct 2013 23:54:22 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Yes, Expalin without Analyze is taking long. It is weird. In the\npg_stat_activity Explain was the only query running. So server was almost\nidle. Using New relic interface I checked CPU was almost idle - around\n10-20%. There were some IO activity - around 40-50%.\n\nI forgot to mention before I could run perf on command line even with root\npermission. It says command not found. 
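One small note on the long-open-transaction check quoted above: written out in runnable form it needs an ORDER BY rather than a WHERE on xact_start. A corrected sketch with the 9.1 column names:

    SELECT procpid, xact_start, waiting, current_query
      FROM pg_stat_activity
     WHERE xact_start IS NOT NULL
     ORDER BY xact_start
     LIMIT 10;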
May be utility is not installed or\nnot enabled.\n\nI have attached the snapshot of vmstat while explain was running in\nbackground. vmstat.txt\n<http://postgresql.1045698.n5.nabble.com/file/n5775349/vmstat.txt> \n\nDo you suggest if I remove all the data files from /data/base folder of\nstandby and again rebuild using rsync from primary ? do you see any issues\nthere.? This is just to rule out any fragmentation on standby side.\n\nOur sys admin is planning to run fsck sometime today or tomorrow.\n\nThanks.\n\n\n\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775349.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 21 Oct 2013 15:59:37 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 22.10.2013 00:59, sparikh wrote:\n> Yes, Expalin without Analyze is taking long. It is weird. In the \n> pg_stat_activity Explain was the only query running. So server was\n> almost idle. Using New relic interface I checked CPU was almost idle\n> - around 10-20%. There were some IO activity - around 40-50%.\n> \n> I forgot to mention before I could run perf on command line even with\n> root permission. It says command not found. May be utility is not\n> installed or not enabled.\n\nObviously you need to install it ... maybe ask your sysadmin to do that.\n\n> I have attached the snapshot of vmstat while explain was running in \n> background. vmstat.txt \n> <http://postgresql.1045698.n5.nabble.com/file/n5775349/vmstat.txt>\n\nThe vmstat clearly shows that ~1 CPU is waiting on I/O. Hmm, I'm really\nwondering what's going on here - I can't think of a case where this\nwould happen with a plain EXPLAIN ...\n\nWe really need the perf results. Or try to run strace, maybe it'll give\nmore info about which files it accesses.\n\n> Do you suggest if I remove all the data files from /data/base folder\n> of standby and again rebuild using rsync from primary ? do you see\n> any issues there.? This is just to rule out any fragmentation on\n> standby side.\n\nThe EXPLAIN really should not do much I/O. I doubt it has anything to do\nwith fragmentation, so I doubt this is going to help.\n\n> Our sys admin is planning to run fsck sometime today or tomorrow.\n\nOK. Which filesystem do you use, btw?\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 22 Oct 2013 01:14:42 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "> Do you suggest if I remove all the data files from /data/base folder \n> of standby and again rebuild using rsync from primary ? do you see \n> any issues there.? This is just to rule out any fragmentation on \n> standby side. \n\nThe EXPLAIN really should not do much I/O. I doubt it has anything to do \nwith fragmentation, so I doubt this is going to help. \n\nActually I was referring to this in the context of addressing main\nunderlying performance issue, not EXPLAIN. 
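If the full re-sync route is ever taken, the copy is normally bracketed by the backup functions so the result is consistent. A minimal sketch of the shape of it, run on the primary, with the label purely illustrative and the rsync itself happening outside the database (it also assumes WAL archiving or streaming is already in place so the rebuilt standby can catch up):

    -- On the primary, before copying the data directory to the standby:
    SELECT pg_start_backup('standby_resync', true);

    -- ... rsync the primary's data directory to the standby here ...

    -- On the primary, once the copy has finished:
    SELECT pg_stop_backup();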
Sorry, I may not have\ncommunicated it correctly.\n\nEven strance does not seem to be installed.\n\nThe filesytem type it shows to me ext3.\n\nThanks.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775361.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 21 Oct 2013 17:00:45 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On Oct 22, 2013 1:14 AM, \"Tomas Vondra\" <[email protected]> wrote:\n>\n> On 22.10.2013 00:59, sparikh wrote:\n> > Yes, Expalin without Analyze is taking long. It is weird. In the\n> > pg_stat_activity Explain was the only query running. So server was\n> > almost idle. Using New relic interface I checked CPU was almost idle\n> > - around 10-20%. There were some IO activity - around 40-50%.\n> >\n> > I forgot to mention before I could run perf on command line even with\n> > root permission. It says command not found. May be utility is not\n> > installed or not enabled.\n>\n> Obviously you need to install it ... maybe ask your sysadmin to do that.\n>\n> > I have attached the snapshot of vmstat while explain was running in\n> > background. vmstat.txt\n> > <http://postgresql.1045698.n5.nabble.com/file/n5775349/vmstat.txt>\n>\n> The vmstat clearly shows that ~1 CPU is waiting on I/O. Hmm, I'm really\n> wondering what's going on here - I can't think of a case where this\n> would happen with a plain EXPLAIN ...\n\nCatalog bloat could make that happen. Though that should show up on the\nmaster as well, it could be that it's cached there and therefor only shows\nus to as cpu and not io and is therefore not noticed.\n\n/Magnus\n\n\nOn Oct 22, 2013 1:14 AM, \"Tomas Vondra\" <[email protected]> wrote:\n>\n> On 22.10.2013 00:59, sparikh wrote:\n> > Yes, Expalin without Analyze is taking long. It is weird. In the\n> > pg_stat_activity Explain was the only query running. So server was\n> > almost idle. Using New relic interface I checked CPU was almost idle\n> > - around 10-20%. There were some IO activity - around 40-50%.\n> >\n> > I forgot to mention before I could run perf on command line even with\n> > root permission. It says command not found. May be utility is not\n> > installed or not enabled.\n>\n> Obviously you need to install it ... maybe ask your sysadmin to do that.\n>\n> > I have attached the snapshot of vmstat while explain was running in\n> > background. vmstat.txt\n> > <http://postgresql.1045698.n5.nabble.com/file/n5775349/vmstat.txt>\n>\n> The vmstat clearly shows that ~1 CPU is waiting on I/O. Hmm, I'm really\n> wondering what's going on here - I can't think of a case where this\n> would happen with a plain EXPLAIN ...\nCatalog bloat could make that happen. Though that should show up on the master as well, it could be that it's cached there and therefor only shows us to as cpu and not io and is therefore not noticed. 
\n\n/Magnus", "msg_date": "Tue, 22 Oct 2013 06:49:24 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 22.10.2013 06:49, Magnus Hagander wrote:\n> \n> On Oct 22, 2013 1:14 AM, \"Tomas Vondra\" <[email protected]\n> <mailto:[email protected]>> wrote:\n>>\n>> On 22.10.2013 00:59, sparikh wrote:\n>> > Yes, Expalin without Analyze is taking long. It is weird. In the\n>> > pg_stat_activity Explain was the only query running. So server was\n>> > almost idle. Using New relic interface I checked CPU was almost idle\n>> > - around 10-20%. There were some IO activity - around 40-50%.\n>> >\n>> > I forgot to mention before I could run perf on command line even with\n>> > root permission. It says command not found. May be utility is not\n>> > installed or not enabled.\n>>\n>> Obviously you need to install it ... maybe ask your sysadmin to do that.\n>>\n>> > I have attached the snapshot of vmstat while explain was running in\n>> > background. vmstat.txt\n>> > <http://postgresql.1045698.n5.nabble.com/file/n5775349/vmstat.txt>\n>>\n>> The vmstat clearly shows that ~1 CPU is waiting on I/O. Hmm, I'm really\n>> wondering what's going on here - I can't think of a case where this\n>> would happen with a plain EXPLAIN ...\n> \n> Catalog bloat could make that happen. Though that should show up on\n> the master as well, it could be that it's cached there and therefor\n> only shows us to as cpu and not io and is therefore not noticed.\n\nMaybe, but sparikh reported the query to be running for ~20 minutes.\nThat'd be hell of a bloat.\n\nSparikh, can you show us the size of system catalogs? Something like\n\n SELECT relname, relpages FROM pg_class\n WHERE relname LIKE 'pg%'\n ORDER BY relpages DESC LIMIT 20;\n\nShould give the same results both on primary and standby.\n\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 22 Oct 2013 19:54:11 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 22.10.2013 02:00, sparikh wrote:\n>> Do you suggest if I remove all the data files from /data/base folder \n>> of standby and again rebuild using rsync from primary ? do you see \n>> any issues there.? This is just to rule out any fragmentation on \n>> standby side. \n> \n> The EXPLAIN really should not do much I/O. I doubt it has anything to do \n> with fragmentation, so I doubt this is going to help. \n> \n> Actually I was referring to this in the context of addressing main\n> underlying performance issue, not EXPLAIN. Sorry, I may not have\n> communicated it correctly.\n> \n> Even strance does not seem to be installed.\n\nIt's 'strace' (aka syscall trace), not 'strance'. Please install both\nperf and strace and try to collect some information about the backend\nexecuting the slow query. We're mostly speculating and we need the data.\n\nTry perf first - it's basically a profiler and the results are usually\nunderstandable. Even a simple \"perf top\" can give us a hint.\n\nStrace is much more low-level and much more difficult to analyze.\n\n> The filesytem type it shows to me ext3.\n\nOK. 
Not the best filesystem IMHO, but I doubt it's related to the issue\nwe're discussing here.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 22 Oct 2013 20:00:34 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": " From Primary:\n\nrelname\trelpages\npg_toast_17673\t1812819\npg_toast_17594\t161660\npg_toast_17972\t121902\npg_toast_17587\t77190\npg_toast_18537\t29108\npg_toast_17578\t26638\npg_toast_17673_index\t19984\npg_toast_17868\t14911\npg_toast_17594_index\t2208\npg_toast_1072246\t1922\npg_toast_17587_index\t1510\npg_toast_17972_index\t1399\npg_statistic\t911\npg_toast_18694\t883\npg_toast_17578_index\t375\npg_attribute\t336\npg_toast_16475\t332\npg_toast_18537_index\t321\npg_proc\t233\npg_depend_depender_index\t176\n\n From Secondary :\n============\nrelname\trelpages\npg_toast_17673\t1812819\npg_toast_17594\t161660\npg_toast_17972\t121902\npg_toast_17587\t77190\npg_toast_18537\t29108\npg_toast_17578\t26638\npg_toast_17673_index\t19984\npg_toast_17868\t14911\npg_toast_17594_index\t2208\npg_toast_1072246\t1922\npg_toast_17587_index\t1510\npg_toast_17972_index\t1399\npg_statistic\t911\npg_toast_18694\t883\npg_toast_17578_index\t375\npg_attribute\t336\npg_toast_16475\t332\npg_toast_18537_index\t321\npg_proc\t233\npg_depend_depender_index\t176\n\nYes, result looks same both on primary and standby.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775526.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 22 Oct 2013 14:41:19 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Sorry, it was typo from my side. I meant strace only.\n\nI will try to request both perf and strace to be installed. But I am not\nquite sure as the VMs are managed by third party. 
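Since the biggest entries in the relpages listing above are all pg_toast_* relations, it may be easier to reason about them mapped back to the tables that own them. A small sketch using pg_class.reltoastrelid:

    SELECT c.relname  AS owning_table,
           t.relname  AS toast_table,
           t.relpages AS toast_pages
      FROM pg_class c
      JOIN pg_class t ON t.oid = c.reltoastrelid
     ORDER BY t.relpages DESC
     LIMIT 10;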
Will keep you posted...\n\nThe main thing puzzling to me is Explain Plan with Analyze takes couple of\nsecs to execute the operation but in reality it runs for more than 20 mins.\n\nThanks.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775529.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 22 Oct 2013 14:50:27 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 22.10.2013 23:41, sparikh wrote:\n>>From Primary:\n> \n> relname\trelpages\n> pg_toast_17673\t1812819\n> pg_toast_17594\t161660\n> pg_toast_17972\t121902\n> pg_toast_17587\t77190\n> pg_toast_18537\t29108\n> pg_toast_17578\t26638\n> pg_toast_17673_index\t19984\n> pg_toast_17868\t14911\n> pg_toast_17594_index\t2208\n> pg_toast_1072246\t1922\n> pg_toast_17587_index\t1510\n> pg_toast_17972_index\t1399\n> pg_statistic\t911\n> pg_toast_18694\t883\n> pg_toast_17578_index\t375\n> pg_attribute\t336\n> pg_toast_16475\t332\n> pg_toast_18537_index\t321\n> pg_proc\t233\n> pg_depend_depender_index\t176\n> \n>>From Secondary :\n> ============\n> relname\trelpages\n> pg_toast_17673\t1812819\n> pg_toast_17594\t161660\n> pg_toast_17972\t121902\n> pg_toast_17587\t77190\n> pg_toast_18537\t29108\n> pg_toast_17578\t26638\n> pg_toast_17673_index\t19984\n> pg_toast_17868\t14911\n> pg_toast_17594_index\t2208\n> pg_toast_1072246\t1922\n> pg_toast_17587_index\t1510\n> pg_toast_17972_index\t1399\n> pg_statistic\t911\n> pg_toast_18694\t883\n> pg_toast_17578_index\t375\n> pg_attribute\t336\n> pg_toast_16475\t332\n> pg_toast_18537_index\t321\n> pg_proc\t233\n> pg_depend_depender_index\t176\n> \n> Yes, result looks same both on primary and standby.\n\nYes. And it also shows that the really interesting tables (e.g.\npg_class) are not bloated.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Oct 2013 01:09:51 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 22.10.2013 23:50, sparikh wrote:\n> Sorry, it was typo from my side. I meant strace only.\n\nOK.\n\n> \n> I will try to request both perf and strace to be installed. But I am\n> not quite sure as the VMs are managed by third party. Will keep you\n> posted...\n\nWhat do you mean by VM? Is this a virtualized environment or bare hardware?\n\n> \n> The main thing puzzling to me is Explain Plan with Analyze takes\n> couple of secs to execute the operation but in reality it runs for\n> more than 20 mins.\n\nSo, now I'm getting confused. Can you please provide timings for each\ncase. I.e. 
how long it takes to execute\n\n1) plain query\n2) explain query\n3) explain analyze query\n\nExecute each of those a couple of times, and let us know about\nsignificant variations.\n\nIt should always be\n\n EXPLAIN ANALYZE >= query >= EXPLAIN\n\nIf you're reporting you see \"EXPLAIN ANALYZE < query\" then I find that\nreally strange.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Oct 2013 01:15:14 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": " I will try to request both perf and strace to be installed. But I am \n> not quite sure as the VMs are managed by third party. Will keep you \n> posted... \n\nWhat do you mean by VM? Is this a virtualized environment or bare hardware? \n\nYes, they are virtualized environments. \n\nSorry about the confusion. But I was just telling from based on the explain\nplan report. e.g at the bottom of explain plan report it says \"Total\nruntime: 1698.453 ms\" (This is with analyze option).\n\nBut from the client perspective (either run from pgadmin or directly from\nthe server command line) it takes more that 20 min to display the output.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775550.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 22 Oct 2013 18:23:20 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Today morning I found that the performance issue on standby database was\nfixed by itself. On further investigation I found that one of the biggest \nused in this query had autovacuum kicked in yesterday on primary. The last\ntime it had autovaccum ran was on Sep 30th.\n\nI am suspecting that this should have been fixed the issue. The table has\nupdate and delete operations. Only thing I did not understand why postgres\ndid not pick this table for autovacuum all these days, in spite of this\ntable is one of the busiest table from DML perspective. I was monitoring the\nprimary database activity all these days and always could see autovacuum was\nrunning on another tables.\n\nThanks.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5775972.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 25 Oct 2013 14:22:51 -0700 (PDT)", "msg_from": "ramistuni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 25.10.2013 23:22, ramistuni wrote:\n> Today morning I found that the performance issue on standby database\n> was fixed by itself. On further investigation I found that one of the\n> biggest used in this query had autovacuum kicked in yesterday on\n> primary. The last time it had autovaccum ran was on Sep 30th.\n> \n> I am suspecting that this should have been fixed the issue. 
The table\n> has update and delete operations. Only thing I did not understand why\n> postgres did not pick this table for autovacuum all these days, in\n> spite of this table is one of the busiest table from DML perspective.\n> I was monitoring the primary database activity all these days and\n> always could see autovacuum was running on another tables.\n\nHi,\n\nI don't see how this would explain the issues you've reported, i.e.\nquery running fast on primary and very slow on standby. That suggests\nthe problem is somehow connected to the replication conflict resolution.\nHowever I don't see a reason why a query should take so much longer\nbefore failing due to a conflict.\n\nTo find out why the autovacuum didn't trigger on the largest/busiest\ntable, you should probably check your logs for autovacuum failures\nand/or cancels.\n\nThen we'll need to know the basic features of the table (most\nimportantly how many rows are there), and autovacuum thresholds. It's\npossible that the table is simply way bigger than the other tables, and\nthus it takes more time to accumulate enough \"dead rows\" to trigger\nautovacuum. Or maybe most of the cleanup tasks was handled by HOT, i.e.\nnot requiring a vacuum at all.\n\nI think you need to check these fields in pg_stat_all_tables\n\n SELECT n_live_tup, n_dead_tup,\n n_tup_ins, n_tup_upd, n_tup_del, n_tup_hot_upd\n FROM pg_stat_all_tables\n WHERE relname = '... tablename ...'\n\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 26 Oct 2013 16:16:26 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Hi,\n\nYes, you are right. The table is the biggest one . Please find below the\ninformation you requested. I agree the fact that autovacuum ran on this\ntable would fix the performance issue on standby does not sound very\nconvincing. But that is the only thing I could correlate when the query on\nstandby started working again. 
Otherwise there is absolutely no changes at\ncode level , database level or OS level.\nAs of now query is still working fine on standby.\n\nI may be wrong, but could it be the case that standby disk was too much\nfragmented compare to primary and autovaccum on primary fixed that.\n(Assuming autovacuum on primary internally triggers the same on standby)\n\n\nSequential Scans\t18\t\nSequential Tuples Read\t1355777067\t\nIndex Scans\t102566124\t\nIndex Tuples Fetched\t67155748\t\nTuples Inserted\t16579520\t\nTuples Updated\t17144291\t\nTuples Deleted\t24383607\t\nTuples HOT Updated\t1214531\t\nLive Tuples\t101712125\t\nDead Tuples\t3333207\t\nHeap Blocks Read\t420703920\t\nHeap Blocks Hit\t496135814\t\nIndex Blocks Read\t66807468\t\nIndex Blocks Hit\t916783267\t\nToast Blocks Read\t310677\t\nToast Blocks Hit\t557735\t\nToast Index Blocks Read\t6959\t\nToast Index Blocks Hit\t936473\t\nLast Vacuum\t\t\nLast Autovacuum\t2013-10-25 02:47:09.914775-04\t\nLast Analyze\t\t\nLast Autoanalyze\t2013-10-25 18:39:25.386091-04\t\nVacuum counter\t0\t\nAutovacuum counter\t2\t\nAnalyze counter\t0\t\nAutoanalyze counter\t4\t\nTable Size\t46 GB\t\nToast Table Size\t615 MB\t\nIndexes Size\t20 GB\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5776156.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 28 Oct 2013 13:23:34 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "Table statistics I sent before were from primary. Following are from standby.\n\nIndex Tuples Fetched\t25910277\t\nTuples Inserted\t0\t\nTuples Updated\t0\t\nTuples Deleted\t0\t\nTuples HOT Updated\t0\t\nLive Tuples\t0\t\nDead Tuples\t0\t\nHeap Blocks Read\t138482386\t\nHeap Blocks Hit\t1059169445\t\nIndex Blocks Read\t4730561\t\nIndex Blocks Hit\t9702556\t\nToast Blocks Read\t1165\t\nToast Blocks Hit\t82\t\nToast Index Blocks Read\t85\t\nToast Index Blocks Hit\t3055\t\nLast Vacuum\t\t\nLast Autovacuum\t\t\nLast Analyze\t\t\nLast Autoanalyze\t\t\nVacuum counter\t0\t\nAutovacuum counter\t0\t\nAnalyze counter\t0\t\nAutoanalyze counter\t0\t\nTable Size\t46 GB\t\nToast Table Size\t615 MB\t\nIndexes Size\t20 GB\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hot-Standby-performance-issue-tp5774673p5776160.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 28 Oct 2013 13:57:00 -0700 (PDT)", "msg_from": "sparikh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 28.10.2013 21:57, sparikh wrote:\n> Table statistics I sent before were from primary. 
Following are from standby.\n> \n> Index Tuples Fetched\t25910277\t\n> Tuples Inserted\t0\t\n> Tuples Updated\t0\t\n> Tuples Deleted\t0\t\n> Tuples HOT Updated\t0\t\n> Live Tuples\t0\t\n> Dead Tuples\t0\t\n> Heap Blocks Read\t138482386\t\n> Heap Blocks Hit\t1059169445\t\n> Index Blocks Read\t4730561\t\n> Index Blocks Hit\t9702556\t\n> Toast Blocks Read\t1165\t\n> Toast Blocks Hit\t82\t\n> Toast Index Blocks Read\t85\t\n> Toast Index Blocks Hit\t3055\t\n> Last Vacuum\t\t\n> Last Autovacuum\t\t\n> Last Analyze\t\t\n> Last Autoanalyze\t\t\n> Vacuum counter\t0\t\n> Autovacuum counter\t0\t\n> Analyze counter\t0\t\n> Autoanalyze counter\t0\t\n> Table Size\t46 GB\t\n> Toast Table Size\t615 MB\t\n> Indexes Size\t20 GB\n\nWhy have you skipped some of the rows posted for primary? E.g. the\nsequential scans info?\n\nAnyway, I think new data are not going to help us as the issue resolved\nsomehow, so the current data are unlikely to show the original cause.\n\nYou can either wait whether it happens again, or dig in the logs to see\nif / why the autovacuum was not running on this table.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 02 Nov 2013 20:33:53 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" }, { "msg_contents": "On 28.10.2013 21:23, sparikh wrote:\n> Hi,\n> \n> Yes, you are right. The table is the biggest one . Please find below the\n> information you requested. I agree the fact that autovacuum ran on this\n> table would fix the performance issue on standby does not sound very\n> convincing. But that is the only thing I could correlate when the query on\n> standby started working again. Otherwise there is absolutely no changes at\n> code level , database level or OS level.\n> As of now query is still working fine on standby.\n> \n> I may be wrong, but could it be the case that standby disk was too much\n> fragmented compare to primary and autovaccum on primary fixed that.\n> (Assuming autovacuum on primary internally triggers the same on standby)\n\nI find it very unlikely, but you didn't gave us necessary data (say, how\nmuch free space was on the disks, etc.). The best way to pinpoint the\nissue would be to run some profiler (which we have repeatedly asked you\nto do), but now that the issue disappeared we can only guess.\n\nPlease monitor the system and if it happens again run perf or other\nprofiler so that we know where the time is spent.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 02 Nov 2013 20:40:23 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Standby performance issue" } ]
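A minimal sketch of how to check how far a table is from its autovacuum trigger point, relevant to the question above about why the big table was not picked for so long. It assumes the table has no per-table autovacuum overrides and uses only standard pg_stat_user_tables columns; 'mytable' is a placeholder name, not a table from the reporter's schema.

    -- dead tuples accumulated vs. the autovacuum trigger point
    -- (autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * live tuples);
    -- assumes no per-table autovacuum settings; 'mytable' is a placeholder
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           last_autovacuum,
           current_setting('autovacuum_vacuum_threshold')::integer
             + current_setting('autovacuum_vacuum_scale_factor')::numeric * n_live_tup
               AS autovacuum_trigger_at
    FROM pg_stat_user_tables
    WHERE relname = 'mytable';

With the default scale factor of 0.2, a table holding roughly 100 million live rows needs on the order of 20 million dead tuples before autovacuum fires, which would be consistent with the long gap between runs reported above.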
[ { "msg_contents": "I have a pretty simple parent-child relationship, where parents are\nsegmented into many bins (actually states). I need to query over the\n(parent, child) join but filter based on aggregates of the parent. That\nis -- all parent, child pairs for parents that are in bin x and also\nhave more than y children. Here's what the data looks like:\n\n CREATE TABLE parent (\n id SERIAL PRIMARY KEY,\n bin INTEGER\n );\n\n CREATE INDEX foo ON parent(bin);\n\n CREATE TABLE child (\n parent_id INTEGER REFERENCES parent(id),\n data INTEGER\n );\n\n INSERT INTO parent (bin) SELECT s.a % 50 FROM generate_series(1,\n100000) AS s(a);\n\n INSERT INTO CHILD (parent_id, data) SELECT id, floor(random() * 50)\nFROM parent;\n INSERT INTO CHILD (parent_id, data) SELECT id, floor(random() * 50)\nFROM parent;\n INSERT INTO CHILD (parent_id, data) SELECT id, floor(random() * 50)\nFROM parent;\n\n ANALYZE parent;\n ANALYZE child;\n\nAnd the query: (1)\n\n SELECT *\n FROM parent\n INNER JOIN child ON child.parent_id = parent.id\n LEFT JOIN (SELECT parent.id, COUNT(*) AS c\n FROM parent\n INNER JOIN child ON child.parent_id = parent.id\n WHERE child.data > 25\n GROUP BY 1) agg1 ON agg1.id = parent.id\n WHERE parent.bin = 1\n AND agg1.c >= 3;\n\nThis does not perform very well, because the subquery is calculated across\nall bins, even the 49 that will be discarded by the base query. The\nquery plan: http://explain.depesz.com/s/Ty4. I feel like the planner\nshould be able to move the bin condition into the subquery, like this: (2)\n\n SELECT *\n FROM parent\n INNER JOIN child ON child.parent_id = parent.id\n LEFT JOIN (SELECT parent.id, COUNT(*) AS c\n FROM parent\n INNER JOIN child ON child.parent_id = parent.id\n WHERE child.data > 25\n -- manually move the base query's condition into the subquery\n AND parent.bin = 1\n GROUP BY 1) agg1 ON agg1.id = parent.id\n WHERE parent.bin = 1\n AND agg1.c >= 3;\n\nThis query produces the ideal query plan\n(http://explain.depesz.com/s/8aRo), but it feels like we're doing the\nplanner's work for it. This SQL is generated from a reporting interface,\nso it would be nice if this stuff could be figured out automatically. I\nknow there are other ways to write this query, but this style of joining\nan aggregation is really nice for reporting. I actually end up joining\nseveral different aggregations and produce a condition across all of them.\n\nNow, maybe the planner doesn't know about the primary key, and that if\nparent.id is the same, parent.bin must be the same. Let's try to give\nit this information as part of the join clause: (3)\n\n SELECT *\n FROM parent\n INNER JOIN child ON child.parent_id = parent.id\n LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c\n FROM parent\n INNER JOIN child ON child.parent_id = parent.id\n WHERE child.data > 25\n GROUP BY 1, 2) agg1 ON agg1.id = parent.id AND agg1.bin = parent.bin\n WHERE parent.bin = 1\n AND agg1.c >= 3;\n\nThis works! Well, at first. 
As soon as we say `bin IN (1, 2)`\ninstead of `bin = 1`, the query plan falls down again:\nhttp://explain.depesz.com/s/u7R: (4)\n\n SELECT *\n FROM parent\n INNER JOIN child ON child.parent_id = parent.id\n LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c\n FROM parent\n INNER JOIN child ON child.parent_id = parent.id\n WHERE child.data > 25\n GROUP BY 1, 2) agg1 ON agg1.id = parent.id AND agg1.bin = parent.bin\n WHERE parent.bin IN (1, 2)\n AND agg1.c >= 3;\n\nNote that again, moving the condition inside the subquery produces the\ncorrect plan.\n\nIt'd be nice if the planner could optimize the query (1) by turning it\ninto (2). I understand that it might not be able to, but if it can pull\nthe condition up in (3), why can't it in (4)?\n\nPS: This is on postgres 9.3\n\nI have a pretty simple parent-child relationship, where parents aresegmented into many bins (actually states). I need to query over the(parent, child) join but filter based on aggregates of the parent. That\nis -- all parent, child pairs for parents that are in bin x and alsohave more than y children. Here's what the data looks like:    CREATE TABLE parent (      id SERIAL PRIMARY KEY,\n      bin INTEGER    );    CREATE INDEX foo ON parent(bin);    CREATE TABLE child (      parent_id INTEGER REFERENCES parent(id),      data INTEGER\n    );    INSERT INTO parent (bin) SELECT s.a % 50 FROM generate_series(1, 100000) AS s(a);    INSERT INTO CHILD (parent_id, data) SELECT id, floor(random() * 50) FROM parent;\n    INSERT INTO CHILD (parent_id, data) SELECT id, floor(random() * 50) FROM parent;    INSERT INTO CHILD (parent_id, data) SELECT id, floor(random() * 50) FROM parent;    ANALYZE parent;\n    ANALYZE child;And the query: (1)    SELECT *    FROM parent    INNER JOIN child ON child.parent_id = parent.id    LEFT JOIN (SELECT parent.id, COUNT(*) AS c\n\n      FROM parent      INNER JOIN child ON child.parent_id = parent.id      WHERE child.data > 25      GROUP BY 1) agg1 ON agg1.id = parent.id\n\n    WHERE parent.bin = 1     AND agg1.c >= 3;This does not perform very well, because the subquery is calculated acrossall bins, even the 49 that will be discarded by the base query. The\nquery plan: http://explain.depesz.com/s/Ty4. I feel like the plannershould be able to move the bin condition into the subquery, like this: (2)\n    SELECT *    FROM parent    INNER JOIN child ON child.parent_id = parent.id    LEFT JOIN (SELECT parent.id, COUNT(*) AS c\n      FROM parent      INNER JOIN child ON child.parent_id = parent.id      WHERE child.data > 25        -- manually move the base query's condition into the subquery\n        AND parent.bin = 1      GROUP BY 1) agg1 ON agg1.id = parent.id    WHERE parent.bin = 1     AND agg1.c >= 3;\nThis query produces the ideal query plan(http://explain.depesz.com/s/8aRo), but it feels like we're doing theplanner's work for it. This SQL is generated from a reporting interface,\nso it would be nice if this stuff could be figured out automatically. Iknow there are other ways to write this query, but this style of joiningan aggregation is really nice for reporting. I actually end up joining\nseveral different aggregations and produce a condition across all of them.Now, maybe the planner doesn't know about the primary key, and that ifparent.id is the same, parent.bin must be the same. 
Let's try to give\nit this information as part of the join clause: (3)    SELECT *    FROM parent    INNER JOIN child ON child.parent_id = parent.id\n\n    LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c      FROM parent      INNER JOIN child ON child.parent_id = parent.id\n\n      WHERE child.data > 25      GROUP BY 1, 2) agg1 ON agg1.id = parent.id AND agg1.bin = parent.bin    WHERE parent.bin = 1\n\n     AND agg1.c >= 3;This works! Well, at first. As soon as we say `bin IN (1, 2)` instead of `bin = 1`, the query plan falls down again:http://explain.depesz.com/s/u7R: (4)\n    SELECT *    FROM parent    INNER JOIN child ON child.parent_id = parent.id    LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c\n      FROM parent      INNER JOIN child ON child.parent_id = parent.id      WHERE child.data > 25      GROUP BY 1, 2) agg1 ON agg1.id = parent.id AND agg1.bin = parent.bin\n    WHERE parent.bin IN (1, 2)     AND agg1.c >= 3;Note that again, moving the condition inside the subquery produces thecorrect plan.It'd be nice if the planner could optimize the query (1) by turning it\ninto (2). I understand that it might not be able to, but if it can pullthe condition up in (3), why can't it in (4)?PS: This is on postgres 9.3", "msg_date": "Wed, 16 Oct 2013 00:33:56 -0600", "msg_from": "Gavin Wahl <[email protected]>", "msg_from_op": true, "msg_subject": "Planner Conceptual Error when Joining a Subquery -- Outer Query\n Condition not Pulled Into Subquery" }, { "msg_contents": "Gavin Wahl <[email protected]> writes:\n> It'd be nice if the planner could optimize the query (1) by turning it\n> into (2). I understand that it might not be able to, but if it can pull\n> the condition up in (3), why can't it in (4)?\n\n(3) is an instance of seeing \"a = b and b = c\" and deducing \"a = c\" from\nthat. (4) does not follow that pattern, so it's outside the realm of\nwhat the planner can deduce.\n\nIn principle we could take \"a = b and (b = c or b = d)\" and deduce\n\"a = c or a = d\" from that, but it'd be a lot more complication for a\nlot less benefit than what we get from the existing logic.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 16 Oct 2013 10:28:44 +0200", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner Conceptual Error when Joining a Subquery -- Outer Query\n Condition not Pulled Into Subquery" }, { "msg_contents": "> (3) is an instance of seeing \"a = b and b = c\" and deducing \"a = c\" from\n> that. (4) does not follow that pattern, so it's outside the realm of what\nthe\n> planner can deduce.\n\nI see, that makes sense. I assumed there was something more complex going\non in\nthe background. What about converting (1) into (2)? I know the planner does\nsomething kind of similar, in converting\n\n SELECT * FROM (SELECT * FROM x WHERE a) WHERE b\n\ninto\n\n SELECT * FROM (SELECT * FROM x WHERE a AND b)\n\nI guess in this case it would have to know about unique indexes to prove\nthat\nif the primary keys are equal, all the other columns are too. My intention\nin\ntrying (3) was to take that burden of proof off the planner.\n\n> (3) is an instance of seeing \"a = b and b = c\" and deducing \"a = c\" from> that. (4) does not follow that pattern, so it's outside the realm of what the\n> planner can deduce.I see, that makes sense. 
I assumed there was something more complex going on in\n\nthe background.  What about converting (1) into (2)? I know the planner doessomething kind of similar, in converting  SELECT * FROM (SELECT * FROM x WHERE a) WHERE b\ninto  SELECT * FROM (SELECT * FROM x WHERE a AND b)\n\nI guess in this case it would have to know about unique indexes to prove thatif the primary keys are equal, all the other columns are too. My intention intrying (3) was to take that burden of proof off the planner.", "msg_date": "Wed, 16 Oct 2013 20:45:12 -0600", "msg_from": "Gavin Wahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner Conceptual Error when Joining a Subquery --\n Outer Query Condition not Pulled Into Subquery" } ]
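Since the reporter is on 9.3, a LATERAL subquery is another way to get the per-parent aggregate evaluated only for the bins that survive the outer filter, without waiting on the planner to push the condition down. This is only a sketch against the example schema in this thread (it relies on the fact that the original LEFT JOIN plus the agg1.c >= 3 filter behaves as an inner join), not a drop-in replacement for machine-generated reporting SQL:

    -- 9.3+: compute the aggregate per qualifying parent via LATERAL,
    -- so only parents in the selected bins are counted
    SELECT parent.*, child.*, agg1.c
    FROM parent
    INNER JOIN child ON child.parent_id = parent.id
    CROSS JOIN LATERAL (
        SELECT count(*) AS c
        FROM child c2
        WHERE c2.parent_id = parent.id
          AND c2.data > 25
    ) agg1
    WHERE parent.bin IN (1, 2)
      AND agg1.c >= 3;

The correlated subquery is driven by the already-filtered parent rows, so the plan no longer depends on the planner deducing agg1.bin = parent.bin from the join clause.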
[ { "msg_contents": "All,\n\nI've often seen people lower seq_page_cost for SSD access. This has the\neffect of raising CPU costs relative to the costs of disk access.\nHowever, CPUs have also gotten lots faster, so I'm not sure that results\nin a better cost balance.\n\nHas anyone done extensive testing on this? What did you find?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 22 Oct 2013 11:56:14 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Logic of lowering seq_page_cost for SSD?" } ]
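For anyone who wants to run that comparison, the page-cost settings can be changed per session or per tablespace, which makes it straightforward to A/B the resulting plans on SSD-backed storage. The values below are purely illustrative rather than recommendations, and 'ssd_space' is a placeholder tablespace name:

    -- session-level experiment: change the costs, re-plan, compare
    SET seq_page_cost = 0.5;
    SET random_page_cost = 1.1;
    -- then re-run EXPLAIN (ANALYZE, BUFFERS) on the query under test and
    -- compare against the plan produced with the default costs

    -- or pin the costs to the SSD-backed tablespace only (9.0+)
    ALTER TABLESPACE ssd_space SET (seq_page_cost = 0.5, random_page_cost = 1.1);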
[ { "msg_contents": "I've been working on trying to normalize a table that's got a bunch of text fields. Normalizing the first 4 has been a non-issue. But when I try and normalize 2 additional fields a bunch of query plans go belly-up.\n\nI've seen this on our old 8.4 databases as well as 9.1. I haven't been able to test on anything newer yet.\n\nA representative query is below. Note that this is a stripped down version of something larger, so it seems a bit silly the way it's written, but I've got others that are exhibiting the same bad behavior:\n\nEXPLAIN ANALYZE\nselect * from (\nselect distinct e.login AS Login, d.date1, d.date2,\n (SELECT COUNT(DISTINCT n.customer_id) FROM notes n\n inner join loans l on n.customer_id = l.customer_id\n inner join loan_tasks_committed as ltc on l.id = ltc.loan_id\n AND n.note_time BETWEEN date1 AND date2 + interval '1 day'\n AND n.activity_cd in ('help_draw')\n AND ltc.created_on between n.note_time - interval '10 minutes' and n.note_time + interval '1 day'\n AND getcust(ltc.created_by)=e.login\n where n.employee_id = e.id\n ) AS \"Draw\"\n FROM\n (select current_date-1 as date1, current_date-1 as date2) as d, employees e\n inner join employees_roles e_r on e.id=e_r.employee_id\n WHERE\n e_r.role_id IN (3,23) --Inbound Customer Support Operator or Customer Service Issue Rep\n GROUP BY\n Login,\n e.id,\n date1,\n date2\n ORDER BY Login) a\n WHERE \"Draw\" > 0\n;\n\nloan_tasks_committed is the table that's been normalized. For testing I'm swapping between tables with two normalization views that boil down to:\n\n_by:\n SELECT lt_by.id, lt_by.lock_version, c.id AS created_by_id, c.by AS created_by, u.id AS updated_by_id, u.by AS updated_by, lt_by.created_on, lt_by.updated_on, lt_by.loan_id, lt_by.entered_on, lt_by.acct_date, lt_by.task_amount, lt_by.loan_task_cd, lt_by.parent_id, lt_by.type_cd, e.id AS entered_by_id, e.by AS entered_by, cl.id AS collected_by_id, cl.by AS collected_by, lt_by.currency_cd, lt_by.committed, lt_by.loan_task_code_id\n FROM lt_by\n LEFT JOIN by_text c ON lt_by.created_by_id = c.id\n LEFT JOIN by_text u ON lt_by.updated_by_id = u.id\n LEFT JOIN by_text e ON lt_by.entered_by_id = e.id\n LEFT JOIN by_text cl ON lt_by.collected_by_id = cl.id;\n\n_cd:\n SELECT lt_cd.id, lt_cd.lock_version, c.id AS created_by_id, c.by AS created_by, u.id AS updated_by_id, u.by AS updated_by, lt_cd.created_on, lt_cd.updated_on, lt_cd.loan_id, lt_cd.entered_on, lt_cd.acct_date, lt_cd.task_amount, ltc.id AS loan_task_code_id, ltc.loan_task_code, ltc.loan_task_code AS loan_task_cd, lt_cd.parent_id, tc.id AS loan_task_type_id, tc.loan_task_type, tc.loan_task_type AS type_cd, e.id AS entered_by_id, e.by AS entered_by, cl.id AS collected_by_id, cl.by AS collected_by, lt_cd.currency_cd, lt_cd.committed\n FROM lt_cd\n LEFT JOIN by_text c ON lt_cd.created_by_id = c.id\n LEFT JOIN by_text u ON lt_cd.updated_by_id = u.id\n LEFT JOIN by_text e ON lt_cd.entered_by_id = e.id\n LEFT JOIN by_text cl ON lt_cd.collected_by_id = cl.id\n LEFT JOIN lt_code ltc ON lt_cd.loan_task_code_id = ltc.id\n LEFT JOIN lt_type tc ON lt_cd.loan_task_type_id = tc.id;\n\n\nAs you can see, they're identical except for normalizing 2 additional fields.\n\nThe additional normalization results in a moderate amount of heap space savings:\n\nSELECT table_name, rows, heap_size, index_size, toast_size, total_size FROM tools.space WHERE table_schema='jnasby';\n table_name | rows | heap_size | index_size | toast_size | 
total_size\n------------------+-------------+------------+------------+------------+------------\n by_text | 721492 | 41 MB | 49 MB | 40 kB | 90 MB\n lt_code | 42 | 8192 bytes | 32 kB | 32 kB | 72 kB\n lt_type | 3 | 8192 bytes | 32 kB | 32 kB | 72 kB\n lt_cd | 9.82601e+07 | 10 GB | 15 GB | 2832 kB | 25 GB\n lt_by | 9.82615e+07 | 12 GB | 21 GB | 3360 kB | 33 GB\n\nI've got full explain analyze below, but the relevant bits are:\n\n_by:\n SubPlan 2\n -> Aggregate (cost=489.06..489.07 rows=1 width=4) (actual time=0.083..0.083 rows=1 loops=692)\n -> Nested Loop (cost=0.00..489.05 rows=1 width=4) (actual time=0.080..0.080 rows=0 loops=692)\n -> Nested Loop (cost=0.00..485.80 rows=10 width=8) (actual time=0.079..0.079 rows=0 loops=692)\n Join Filter: ((jnasby.lt_by.created_on >= (n.note_time - '00:10:00'::interval)) AND (jnasby.lt_by.created_on <= (n.note_time + '1 day'::interval)))\n -> Nested Loop (cost=0.00..383.44 rows=14 width=16) (actual time=0.054..0.059 rows=0 loops=692)\n -> Index Scan using notes_u1 on notes n (cost=0.00..344.23 rows=1 width=12) (actual time=0.053..0.057 rows=0 loops=692)\n Index Cond: ((employee_id = e.id) AND (note_time >= ((('now'::text)::date - 1))) AND (note_time <= (((('now'::text)::date - 1)) + '1 day'::interval)))\n Filter: ((activity_cd)::text = 'help_draw'::text)\n -> Index Scan using loans_m12 on loans l (cost=0.00..38.94 rows=22 width=8) (actual time=0.034..0.042 rows=2 loops=18)\n Index Cond: (customer_id = n.customer_id)\n -> Index Scan using lt_by__loan_id on lt_by (cost=0.00..6.50 rows=36 width=28) (actual time=0.043..0.477 rows=60 loops=28)\n Index Cond: (loan_id = l.id)\n -> Index Scan using by_text_pkey on by_text c (cost=0.00..0.31 rows=1 width=4) (never executed)\n Index Cond: (id = jnasby.lt_by.created_by_id)\n Filter: (\"substring\"((by)::text, '^[CAE]:(.*?) I:'::text) = (e.login)::text)\n\n_cd:\n SubPlan 2\n -> Aggregate (cost=3089331.15..3089331.16 rows=1 width=4) (actual time=372.589..372.589 rows=1 loops=692)\n -> Hash Join (cost=16560.08..3089331.14 rows=1 width=4) (actual time=372.586..372.586 rows=0 loops=692)\n Hash Cond: (jnasby.lt_cd.loan_id = l.id)\n Join Filter: ((jnasby.lt_cd.created_on >= (n.note_time - '00:10:00'::interval)) AND (jnasby.lt_cd.created_on <= (n.note_time + '1 day'::interval)))\n -> Hash Join (cost=16176.47..3087105.37 rows=491238 width=12) (actual time=14610.882..32223.008 rows=2642 loops=8)\n Hash Cond: (jnasby.lt_cd.created_by_id = c.id)\n -> Seq Scan on lt_cd (cost=0.00..2329065.44 rows=98260144 width=32) (actual time=0.018..15123.405 rows=98261588 loops=8)\n -> Hash (cost=16131.38..16131.38 rows=3607 width=4) (actual time=4519.878..4519.878 rows=20 loops=8)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Seq Scan on by_text c (cost=0.00..16131.38 rows=3607 width=4) (actual time=4405.172..4519.861 rows=20 loops=8)\n Filter: (\"substring\"((by)::text, '^[CAE]:(.*?) 
I:'::text) = (e.login)::text)\n -> Hash (cost=383.44..383.44 rows=14 width=16) (actual time=0.055..0.055 rows=0 loops=692)\n Buckets: 1024 Batches: 1 Memory Usage: 0kB\n -> Nested Loop (cost=0.00..383.44 rows=14 width=16) (actual time=0.051..0.055 rows=0 loops=692)\n -> Index Scan using notes_u1 on notes n (cost=0.00..344.23 rows=1 width=12) (actual time=0.049..0.052 rows=0 loops=692)\n Index Cond: ((employee_id = e.id) AND (note_time >= ((('now'::text)::date - 1))) AND (note_time <= (((('now'::text)::date - 1)) + '1 day'::interval)))\n Filter: ((activity_cd)::text = 'help_draw'::text)\n -> Index Scan using loans_m12 on loans l (cost=0.00..38.94 rows=22 width=8) (actual time=0.033..0.040 rows=2 loops=18)\n Index Cond: (customer_id = n.customer_id)\n\n\nI've tried disabling execution paths to get it to loop join, but no matter what it insists on seqscanning lt_cd. I've verified that simpler queries will use an index scan of loan_id (ie: WHERE loan_id < 999999).\n\nAgain, I've got multiple queries exhibiting this problem... this is NOT directly related to this query.\n\n\n_by: http://explain.depesz.com/s/LYOY\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan on a (cost=843581.41..843600.81 rows=862 width=24) (actual time=64.735..64.735 rows=0 loops=1)\n -> Unique (cost=843581.41..843592.19 rows=862 width=20) (actual time=64.735..64.735 rows=0 loops=1)\n -> Sort (cost=843581.41..843583.57 rows=862 width=20) (actual time=64.733..64.733 rows=0 loops=1)\n Sort Key: e.login, ((('now'::text)::date - 1)), ((('now'::text)::date - 1)), ((SubPlan 1))\n Sort Method: quicksort Memory: 25kB\n -> HashAggregate (cost=380.35..843539.38 rows=862 width=20) (actual time=64.726..64.726 rows=0 loops=1)\n Filter: ((SubPlan 2) > 0)\n -> Hash Join (cost=305.29..371.73 rows=862 width=20) (actual time=4.765..5.866 rows=895 loops=1)\n Hash Cond: (e_r.employee_id = e.id)\n -> Nested Loop (cost=15.18..67.61 rows=862 width=12) (actual time=0.228..0.976 rows=895 loops=1)\n -> Result (cost=0.00..0.03 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)\n -> Bitmap Heap Scan on employees_roles e_r (cost=15.18..58.96 rows=862 width=4) (actual time=0.214..0.782 rows=895 loops=1)\n Recheck Cond: (role_id = ANY ('{3,23}'::integer[]))\n -> Bitmap Index Scan on employees_roles_m1 (cost=0.00..14.97 rows=862 width=0) (actual time=0.195..0.195 rows=895 loops=1)\n Index Cond: (role_id = ANY ('{3,23}'::integer[]))\n -> Hash (cost=247.27..247.27 rows=3427 width=12) (actual time=4.521..4.521 rows=3427 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 154kB\n -> Seq Scan on employees e (cost=0.00..247.27 rows=3427 width=12) (actual time=0.021..3.595 rows=3427 loops=1)\n SubPlan 1\n -> Aggregate (cost=489.06..489.07 rows=1 width=4) (never executed)\n -> Nested Loop (cost=0.00..489.05 rows=1 width=4) (never executed)\n -> Nested Loop (cost=0.00..485.80 rows=10 width=8) (never executed)\n Join Filter: ((jnasby.lt_by.created_on >= (n.note_time - '00:10:00'::interval)) AND (jnasby.lt_by.created_on <= (n.note_time + '1 day'::interval)))\n -> Nested Loop (cost=0.00..383.44 rows=14 width=16) (never executed)\n -> Index Scan using notes_u1 on notes n (cost=0.00..344.23 rows=1 width=12) (never executed)\n Index Cond: ((employee_id = e.id) AND (note_time >= ((('now'::text)::date - 1))) AND (note_time <= (((('now'::text)::date - 1)) + '1 day'::interval)))\n 
Filter: ((activity_cd)::text = 'help_draw'::text)\n -> Index Scan using loans_m12 on loans l (cost=0.00..38.94 rows=22 width=8) (never executed)\n Index Cond: (customer_id = n.customer_id)\n -> Index Scan using lt_by__loan_id on lt_by (cost=0.00..6.50 rows=36 width=28) (never executed)\n Index Cond: (loan_id = l.id)\n -> Index Scan using by_text_pkey on by_text c (cost=0.00..0.31 rows=1 width=4) (never executed)\n Index Cond: (id = jnasby.lt_by.created_by_id)\n Filter: (\"substring\"((by)::text, '^[CAE]:(.*?) I:'::text) = (e.login)::text)\n SubPlan 2\n -> Aggregate (cost=489.06..489.07 rows=1 width=4) (actual time=0.083..0.083 rows=1 loops=692)\n -> Nested Loop (cost=0.00..489.05 rows=1 width=4) (actual time=0.080..0.080 rows=0 loops=692)\n -> Nested Loop (cost=0.00..485.80 rows=10 width=8) (actual time=0.079..0.079 rows=0 loops=692)\n Join Filter: ((jnasby.lt_by.created_on >= (n.note_time - '00:10:00'::interval)) AND (jnasby.lt_by.created_on <= (n.note_time + '1 day'::interval)))\n -> Nested Loop (cost=0.00..383.44 rows=14 width=16) (actual time=0.054..0.059 rows=0 loops=692)\n -> Index Scan using notes_u1 on notes n (cost=0.00..344.23 rows=1 width=12) (actual time=0.053..0.057 rows=0 loops=692)\n Index Cond: ((employee_id = e.id) AND (note_time >= ((('now'::text)::date - 1))) AND (note_time <= (((('now'::text)::date - 1)) + '1 day'::interval)))\n Filter: ((activity_cd)::text = 'help_draw'::text)\n -> Index Scan using loans_m12 on loans l (cost=0.00..38.94 rows=22 width=8) (actual time=0.034..0.042 rows=2 loops=18)\n Index Cond: (customer_id = n.customer_id)\n -> Index Scan using lt_by__loan_id on lt_by (cost=0.00..6.50 rows=36 width=28) (actual time=0.043..0.477 rows=60 loops=28)\n Index Cond: (loan_id = l.id)\n -> Index Scan using by_text_pkey on by_text c (cost=0.00..0.31 rows=1 width=4) (never executed)\n Index Cond: (id = jnasby.lt_by.created_by_id)\n Filter: (\"substring\"((by)::text, '^[CAE]:(.*?) 
I:'::text) = (e.login)::text)\n Total runtime: 65.070 ms\n(51 rows)\n\n_cd: http://explain.depesz.com/s/fkc\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan on a (cost=5326007345.03..5326007364.42 rows=862 width=24) (actual time=257838.254..257838.254 rows=0 loops=1)\n -> Unique (cost=5326007345.03..5326007355.80 rows=862 width=20) (actual time=257838.254..257838.254 rows=0 loops=1)\n -> Sort (cost=5326007345.03..5326007347.18 rows=862 width=20) (actual time=257838.254..257838.254 rows=0 loops=1)\n Sort Key: e.login, ((('now'::text)::date - 1)), ((('now'::text)::date - 1)), ((SubPlan 1))\n Sort Method: quicksort Memory: 25kB\n -> HashAggregate (cost=380.35..5326007303.00 rows=862 width=20) (actual time=257838.246..257838.246 rows=0 loops=1)\n Filter: ((SubPlan 2) > 0)\n -> Hash Join (cost=305.29..371.73 rows=862 width=20) (actual time=3.916..4.789 rows=895 loops=1)\n Hash Cond: (e_r.employee_id = e.id)\n -> Nested Loop (cost=15.18..67.61 rows=862 width=12) (actual time=0.208..0.788 rows=895 loops=1)\n -> Result (cost=0.00..0.03 rows=1 width=0) (actual time=0.006..0.006 rows=1 loops=1)\n -> Bitmap Heap Scan on employees_roles e_r (cost=15.18..58.96 rows=862 width=4) (actual time=0.191..0.626 rows=895 loops=1)\n Recheck Cond: (role_id = ANY ('{3,23}'::integer[]))\n -> Bitmap Index Scan on employees_roles_m1 (cost=0.00..14.97 rows=862 width=0) (actual time=0.173..0.173 rows=895 loops=1)\n Index Cond: (role_id = ANY ('{3,23}'::integer[]))\n -> Hash (cost=247.27..247.27 rows=3427 width=12) (actual time=3.696..3.696 rows=3427 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 154kB\n -> Seq Scan on employees e (cost=0.00..247.27 rows=3427 width=12) (actual time=0.012..2.967 rows=3427 loops=1)\n SubPlan 1\n -> Aggregate (cost=3089331.15..3089331.16 rows=1 width=4) (never executed)\n -> Hash Join (cost=16560.08..3089331.14 rows=1 width=4) (never executed)\n Hash Cond: (jnasby.lt_cd.loan_id = l.id)\n Join Filter: ((jnasby.lt_cd.created_on >= (n.note_time - '00:10:00'::interval)) AND (jnasby.lt_cd.created_on <= (n.note_time + '1 day'::interval)))\n -> Hash Join (cost=16176.47..3087105.37 rows=491238 width=12) (never executed)\n Hash Cond: (jnasby.lt_cd.created_by_id = c.id)\n -> Seq Scan on lt_cd (cost=0.00..2329065.44 rows=98260144 width=32) (never executed)\n -> Hash (cost=16131.38..16131.38 rows=3607 width=4) (never executed)\n -> Seq Scan on by_text c (cost=0.00..16131.38 rows=3607 width=4) (never executed)\n Filter: (\"substring\"((by)::text, '^[CAE]:(.*?) 
I:'::text) = (e.login)::text)\n -> Hash (cost=383.44..383.44 rows=14 width=16) (never executed)\n -> Nested Loop (cost=0.00..383.44 rows=14 width=16) (never executed)\n -> Index Scan using notes_u1 on notes n (cost=0.00..344.23 rows=1 width=12) (never executed)\n Index Cond: ((employee_id = e.id) AND (note_time >= ((('now'::text)::date - 1))) AND (note_time <= (((('now'::text)::date - 1)) + '1 day'::interval)))\n Filter: ((activity_cd)::text = 'help_draw'::text)\n -> Index Scan using loans_m12 on loans l (cost=0.00..38.94 rows=22 width=8) (never executed)\n Index Cond: (customer_id = n.customer_id)\n SubPlan 2\n -> Aggregate (cost=3089331.15..3089331.16 rows=1 width=4) (actual time=372.589..372.589 rows=1 loops=692)\n -> Hash Join (cost=16560.08..3089331.14 rows=1 width=4) (actual time=372.586..372.586 rows=0 loops=692)\n Hash Cond: (jnasby.lt_cd.loan_id = l.id)\n Join Filter: ((jnasby.lt_cd.created_on >= (n.note_time - '00:10:00'::interval)) AND (jnasby.lt_cd.created_on <= (n.note_time + '1 day'::interval)))\n -> Hash Join (cost=16176.47..3087105.37 rows=491238 width=12) (actual time=14610.882..32223.008 rows=2642 loops=8)\n Hash Cond: (jnasby.lt_cd.created_by_id = c.id)\n -> Seq Scan on lt_cd (cost=0.00..2329065.44 rows=98260144 width=32) (actual time=0.018..15123.405 rows=98261588 loops=8)\n -> Hash (cost=16131.38..16131.38 rows=3607 width=4) (actual time=4519.878..4519.878 rows=20 loops=8)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Seq Scan on by_text c (cost=0.00..16131.38 rows=3607 width=4) (actual time=4405.172..4519.861 rows=20 loops=8)\n Filter: (\"substring\"((by)::text, '^[CAE]:(.*?) I:'::text) = (e.login)::text)\n -> Hash (cost=383.44..383.44 rows=14 width=16) (actual time=0.055..0.055 rows=0 loops=692)\n Buckets: 1024 Batches: 1 Memory Usage: 0kB\n -> Nested Loop (cost=0.00..383.44 rows=14 width=16) (actual time=0.051..0.055 rows=0 loops=692)\n -> Index Scan using notes_u1 on notes n (cost=0.00..344.23 rows=1 width=12) (actual time=0.049..0.052 rows=0 loops=692)\n Index Cond: ((employee_id = e.id) AND (note_time >= ((('now'::text)::date - 1))) AND (note_time <= (((('now'::text)::date - 1)) + '1 day'::interval)))\n Filter: ((activity_cd)::text = 'help_draw'::text)\n -> Index Scan using loans_m12 on loans l (cost=0.00..38.94 rows=22 width=8) (actual time=0.033..0.040 rows=2 loops=18)\n Index Cond: (customer_id = n.customer_id)\n Total runtime: 257838.571 ms\n(57 rows)\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 28 Oct 2013 16:58:15 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with hash join over nested loop" }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> I've been working on trying to normalize a table that's got a bunch of text fields. Normalizing the first 4 has been a non-issue. But when I try and normalize 2 additional fields a bunch of query plans go belly-up.\n\nTry increasing join_collapse_limit/from_collapse_limit. 
I'm a bit\nconfused by your description but I think maybe you've got more than 8\nrelations in the subqueries.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 28 Oct 2013 19:13:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "On 10/28/13 6:13 PM, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n>> I've been working on trying to normalize a table that's got a bunch of text fields. Normalizing the first 4 has been a non-issue. But when I try and normalize 2 additional fields a bunch of query plans go belly-up.\n>\n> Try increasing join_collapse_limit/from_collapse_limit. I'm a bit\n> confused by your description but I think maybe you've got more than 8\n> relations in the subqueries.\n\nHell, never thought about that. Bumping it up did the trick. Thanks!\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 28 Oct 2013 18:53:03 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "On Mon, Oct 28, 2013 at 6:13 PM, Tom Lane <[email protected]> wrote:\n> Jim Nasby <[email protected]> writes:\n>> I've been working on trying to normalize a table that's got a bunch of text fields. Normalizing the first 4 has been a non-issue. But when I try and normalize 2 additional fields a bunch of query plans go belly-up.\n>\n> Try increasing join_collapse_limit/from_collapse_limit. I'm a bit\n> confused by your description but I think maybe you've got more than 8\n> relations in the subqueries.\n\nHm -- wondering out loud if there would be any value in terms of\ndecorating explain output when that limit was hit and if it's\npractical to do so...\n\nmerlni\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Oct 2013 09:10:43 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "On 10/29/13 9:10 AM, Merlin Moncure wrote:\n> On Mon, Oct 28, 2013 at 6:13 PM, Tom Lane <[email protected]> wrote:\n>> Jim Nasby <[email protected]> writes:\n>>> I've been working on trying to normalize a table that's got a bunch of text fields. Normalizing the first 4 has been a non-issue. But when I try and normalize 2 additional fields a bunch of query plans go belly-up.\n>>\n>> Try increasing join_collapse_limit/from_collapse_limit. I'm a bit\n>> confused by your description but I think maybe you've got more than 8\n>> relations in the subqueries.\n>\n> Hm -- wondering out loud if there would be any value in terms of\n> decorating explain output when that limit was hit and if it's\n> practical to do so...\n\nI think the community would *love* any method of noting potential performance problems. Hitting the GEQO limit fits in there as well. 
We could eventually warn about other things as well, like going just over work_mem or seqscanning a big table for a small number of rows.\n\nI'm also wondering if it's time to raise those limits. I constructed a somewhat contrived test query in our schema to test this. This is a legitimate join path for our schema... I can't see why someone would use the *full* path, but smaller sections are definitely in use. It's basically all joins, with one simple filter on top of that.\n\nI'd rather not share the actual query or plan, but:\n\ngrep -i scan temp.txt |wc -l\n28\n\nAll tests done via EXPLAIN ... in psql with \\timing turned on. I ignored obvious outliers... margin of error is ~5% from what I saw:\n\nDefault config:\t\t\t21ms\ngeqo = off:\t\t\t19ms\ngeqo off, from_collapse = 99:\t19ms\nfrom_collapse_limit = 99:\t21ms\njoin_collapse_limit = 99:\t171ms\nboth = 99:\t\t\t176ms\ngeqo off, join_collapse = 99\t1.2s\nboth + geqo = off:\t\t1.2s\n\nObviously there's cases where 1.2 seconds of planning time will kill you... but if you're that time sensitive and using 28 tables I think it's reasonable to expect people to do some hand tuning! :)\n\nConversely, where you are likely to get to that sheer number of tables is when you're doing something that's going to take a non-trivial amount of time to execute. In this particular case, if I limit the query to a single row (via blah_id = 2, not via limit), it takes ~2ms to execute when cached with full optimization (interestingly, planning time was at about 926ms at that point).\n\nNow that looks horrible... 926ms to plan a query that takes 2ms to return. But I'm not even going to bother with the 20ms plan, because it's going to take minutes if not HOURS to run (it's just full scanning everything it can find).\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Oct 2013 11:21:26 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> I'm also wondering if it's time to raise those limits.\n\nYeah, possibly. The current default values were set on machines much\nsmaller/slower than most current hardware.\n\nI think also that the collapse limits were invented mainly to keep people\nout of GEQO's clutches, but we've made some significant fixes in GEQO\nsince then. Maybe the real answer is to make the default collapse limits\nmuch higher, and lower geqo_threshold to whatever we think the threshold\nof pain is for applying the regular planner.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Oct 2013 12:45:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "On 10/29/13 11:45 AM, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n>> I'm also wondering if it's time to raise those limits.\n>\n> Yeah, possibly. 
The current default values were set on machines much\n> smaller/slower than most current hardware.\n>\n> I think also that the collapse limits were invented mainly to keep people\n> out of GEQO's clutches, but we've made some significant fixes in GEQO\n> since then. Maybe the real answer is to make the default collapse limits\n> much higher, and lower geqo_threshold to whatever we think the threshold\n> of pain is for applying the regular planner.\n\nIn my test case geqo does seem to do a good job. I'll see if I can get some data on how number of relations affects planning time... I don't get much of a warm fuzzy about lowering geqo...\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Oct 2013 11:52:32 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> On 10/29/13 11:45 AM, Tom Lane wrote:\n>> Jim Nasby <[email protected]> writes:\n>>> I'm also wondering if it's time to raise those limits.\n\n>> Yeah, possibly. The current default values were set on machines much\n>> smaller/slower than most current hardware.\n>> \n>> I think also that the collapse limits were invented mainly to keep people\n>> out of GEQO's clutches, but we've made some significant fixes in GEQO\n>> since then. Maybe the real answer is to make the default collapse limits\n>> much higher, and lower geqo_threshold to whatever we think the threshold\n>> of pain is for applying the regular planner.\n\n> In my test case geqo does seem to do a good job. I'll see if I can get some data on how number of relations affects planning time... I don't get much of a warm fuzzy about lowering geqo...\n\nYeah, it's probably not that simple. A trawl through the archives\nreminded me that we've discussed this quite a bit in the past already.\nThe collapse limits are important for the regular planner not only to\nlimit runtime but also to limit planner memory consumption; moreover,\nGEQO doesn't behave all that well either with very large join problems.\nThese facts killed a proposal back in 2009 to remove the collapse limits\naltogether. There was also some discussion in 2011, see thread here:\nhttp://www.postgresql.org/message-id/[email protected]\nbut the general feeling seemed to be that we needed more planner\ninfrastructure work first. In particular it seems like the best way\nforward might require limiting subproblem size using something more\nsophisticated than just \"number of relations\".\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Oct 2013 14:20:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "On 10/29/13 1:20 PM, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n>> On 10/29/13 11:45 AM, Tom Lane wrote:\n>>> Jim Nasby <[email protected]> writes:\n>>>> I'm also wondering if it's time to raise those limits.\n>\n>>> Yeah, possibly. 
The current default values were set on machines much\n>>> smaller/slower than most current hardware.\n>>>\n>>> I think also that the collapse limits were invented mainly to keep people\n>>> out of GEQO's clutches, but we've made some significant fixes in GEQO\n>>> since then. Maybe the real answer is to make the default collapse limits\n>>> much higher, and lower geqo_threshold to whatever we think the threshold\n>>> of pain is for applying the regular planner.\n>\n>> In my test case geqo does seem to do a good job. I'll see if I can get some data on how number of relations affects planning time... I don't get much of a warm fuzzy about lowering geqo...\n>\n> Yeah, it's probably not that simple. A trawl through the archives\n> reminded me that we've discussed this quite a bit in the past already.\n> The collapse limits are important for the regular planner not only to\n> limit runtime but also to limit planner memory consumption; moreover,\n> GEQO doesn't behave all that well either with very large join problems.\n> These facts killed a proposal back in 2009 to remove the collapse limits\n> altogether. There was also some discussion in 2011, see thread here:\n> http://www.postgresql.org/message-id/[email protected]\n> but the general feeling seemed to be that we needed more planner\n> infrastructure work first. In particular it seems like the best way\n> forward might require limiting subproblem size using something more\n> sophisticated than just \"number of relations\".\n\nYeah, I saw one mention of 1GB... that's a bit disconcerting.\n\nIs there a way to measure memory consumption during planning, short of something like strace? (I've got no dev tools available on our servers.)\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Oct 2013 15:08:28 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> Is there a way to measure memory consumption during planning, short of something like strace? (I've got no dev tools available on our servers.)\n\nNothing built-in, I'm pretty sure. You could probably add some\ninstrumentation, but that would require running modified executables ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Oct 2013 16:36:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "On 10/29/13 3:36 PM, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n>> Is there a way to measure memory consumption during planning, short of something like strace? (I've got no dev tools available on our servers.)\n>\n> Nothing built-in, I'm pretty sure. 
You could probably add some\n> instrumentation, but that would require running modified executables ...\n\nFYI, client_min_messages = debug5 and log_planner_stats = on is useful, though I wish it included ru_maxrss (see http://www.gnu.org/software/libc/manual/html_node/Resource-Usage.html).\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 30 Oct 2013 17:27:57 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "On 10/30/13 5:27 PM, Jim Nasby wrote:\n> On 10/29/13 3:36 PM, Tom Lane wrote:\n>> Jim Nasby <[email protected]> writes:\n>>> Is there a way to measure memory consumption during planning, short of something like strace? (I've got no dev tools available on our servers.)\n>>\n>> Nothing built-in, I'm pretty sure. You could probably add some\n>> instrumentation, but that would require running modified executables ...\n>\n> FYI, client_min_messages = debug5 and log_planner_stats = on is useful, though I wish it included ru_maxrss (see http://www.gnu.org/software/libc/manual/html_node/Resource-Usage.html).\n\nOh, and in my 28 table case ru_minflt was 428 4k memory pages (1.7MB). Not a great measurement, but better than nothing. I didn't detect anything noticeable on vmstat either, so I don't think the consumption is huge (an email in the older thread mentioned 1GB... I'm not seeing that).\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 30 Oct 2013 17:32:06 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with hash join over nested loop" }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> Oh, and in my 28 table case ru_minflt was 428 4k memory pages (1.7MB). Not a great measurement, but better than nothing. I didn't detect anything noticeable on vmstat either, so I don't think the consumption is huge (an email in the older thread mentioned 1GB... I'm not seeing that).\n\nNote that what matters here is not so much the number of base relations as\nthe number of possible join paths, which varies quite a lot depending on\nthe given and deduced join conditions. Worst-case is where join\nconditions are available between any two base relations, which is not\na real common case.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 30 Oct 2013 19:06:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with hash join over nested loop" } ]
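For readers hitting the same symptom, a short sketch of the session-level experiment discussed in this thread: raise the collapse limits past the number of relations in the query, optionally move geqo_threshold with them, and compare plan shape and planning time against the defaults. The value 16 is arbitrary (the default for both collapse limits is 8), and log_planner_stats needs appropriate privileges:

    SET from_collapse_limit = 16;    -- default 8
    SET join_collapse_limit = 16;    -- default 8
    SET geqo_threshold = 18;         -- keep GEQO from taking over at the old limit
    -- optional: report planner effort in the client, as mentioned above
    SET client_min_messages = debug5;
    SET log_planner_stats = on;
    -- then EXPLAIN ANALYZE the query against the normalized view and compare
    -- planning time and plan shape with the default settings

Note that geqo_threshold defaults to 12, so once the collapse limits go past that, GEQO rather than the exhaustive planner handles the larger join trees unless geqo_threshold is raised as well.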
[ { "msg_contents": "Hi folks,\n\nWe're adding a foreign key constraint to a 20-million row table on our\nproduction database, and it's taking about 7 minutes. Because it's an\nALTER TABLE, Postgres acquires an ACCESS EXCLUSIVE lock that prevents\nany reads/writes (though this particular table is very write-heavy, so\neven a read lock wouldn't help here).\n\nFor context: we do this whenever we deploy our site, because our\ndatabase is split across two schemas (\"live\" and \"content\"), and the\n\"content\" schema we dump from our office database and restore into our\nproduction database. To achieve this we restore it as \"contentnew\"\ninto the production db, then rename the \"content\" schema to\n\"contentold\" and the \"contentnew\" schema to \"content\".\n\nThis completes the actual deployment, however, now our live-to-content\nforeign keys are pointing to \"contentold\", so the final step is to go\nthrough and drop all the live-to-content foreign keys and recreate\nthem (against the new content schema). Most of the tables are small\nand re-adding the constraint is quick, except for this one table,\nwhich is 20M rows and basically pauses our live website for 7 minutes.\n\nA couple of questions about the ADD CONSTRAINT. The foreign key column\non the local table is indexed, and there are only ~50 unique values,\nso the db *could* come up with the unique values pretty quickly and\nthen check them. Or, even if it needs to do a full scan of the 20M-big\ntable (\"ratesrequests\") and join with the referenced table\n(\"provider\") on the foreign key, which is I think the most it should\nhave to do to check the foreign key, the following query only takes\n~20s, not 7 minutes:\n\nselect p.name\nfrom ratesrequests r\njoin provider p on r.providerid = p.providerid\n\nI'm guessing the ADD CONSTRAINT logic bypasses some of the query\noptimization used for SELECT queries. So I suppose my questions are:\n\n1) Are there ways to speed up adding the constraint? Just speeding it\nup a little bit won't really help -- for this purpose it'll need to be\nan order of magnitude or so. I'm aware of a couple of possibilities:\n\na) Upgrade to Postgres 9.1 and use ADD CONSTRAINT NOT VALID. However,\nthis doesn't really help, as you need to run VALIDATE CONSTRAINT at\nsome later stage, which still grabs the exclusive lock.\n\nb) Delete old rows from the table so it's not so big. Feels a bit\nhacky just to fix this issue.\n\nc) Get rid of this foreign key constraint entirely and just check it\nin code when we insert. Pragmatic solution, but not ideal.\n\n2) Is there a better way to do the \"content\" schema dump/restore that\navoids dropping and recreating the inter-schema foreign keys?\n\nOther notes and research:\n\n* We're running \"PostgreSQL 9.0.2, compiled by Visual C++ build 1500,\n64-bit\" on 64-bit Windows Server 2008 SP1 (6.0.6001)\n* The \"ratesrequests\" table has two text columns, one of which often\ncontains a few hundred to a couple of KB of data in the field. It is\nadded to rapidly. We regularly VACCUM ANALYZE it.\n* As expected, the ADD CONSTRAINT has gotten slower over time as this\ntable grew. 
However -- I'm not 100% sure of this, but it seems to have\njumped recently (from 3-4 minutes to 7 minutes).\n* http://www.postgresql.org/message-id/[email protected]\n-- indicates that ADD CONSTRAINT isn't optimized as well as it could\nbe\n* http://www.postgresql.org/message-id/[email protected] --\nindicates that the db ignores the index when add constraints\n\nThanks,\nBen.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 30 Oct 2013 10:27:24 +1300", "msg_from": "Ben Hoyt <[email protected]>", "msg_from_op": true, "msg_subject": "Adding foreign key constraint holds exclusive lock for too long (on\n production database)" }, { "msg_contents": "Ben Hoyt wrote\n> * http://www.postgresql.org/message-id/\n\n> 51A11C97.90209@\n\n> --\n> indicates that the db ignores the index when add constraints\n\nAs noted in the referenced thread (and never contradicted) the current\nalgorithm is \"for each record does the value in the FK column exist in the\nPK table?\" not \"do all of the values currently found on the FK table exist\nin the PK table?\". The later question being seemingly much faster (if table\nstatistics imply a small-ish number of bins and the presence of an index on\nthe column) to answer during a bulk ALTER TABLE but the former being the\nmore common question - when simply adding a single row.\n\nYou need to figure out some way to avoid continually evaluating the FK\nconstraint on all 20M row - of which most of them already were previously\nconfirmed. Most commonly people simply perform an incremental update of a\nlive table and insert/update/delete only the records that are changing\ninstead of replacing an entire table with a new one. If you are generally\nhappy with your current procedure I would probably continue on with your\n\"live\" and \"content\" schemas but move this table into a \"bulk_content\"\nschema and within that have a \"live\" table and a \"staging\" table. You can\ndrop/replace the staging table from your office database and then write a\nroutine to incrementally update the live table. The FK references in live\nand content would then persistently reference the \"live\" table and only the\nsubset of changes introduced would need to be checked.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Adding-foreign-key-constraint-holds-exclusive-lock-for-too-long-on-production-database-tp5776313p5776315.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Oct 2013 17:18:09 -0700 (PDT)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key constraint holds exclusive lock for too long\n (on production database)" }, { "msg_contents": "David Johnston <[email protected]> writes:\n> As noted in the referenced thread (and never contradicted) the current\n> algorithm is \"for each record does the value in the FK column exist in the\n> PK table?\" not \"do all of the values currently found on the FK table exist\n> in the PK table?\".\n\nWell, apparently nobody who knows the code was paying attention, because\nthat hasn't been true for some time. 
ALTER TABLE ADD FOREIGN KEY will\nactually validate the constraint using a query constructed like this\n(cf RI_Initial_Check() in ri_triggers.c):\n\n\t *\tSELECT fk.keycols FROM ONLY relname fk\n\t *\t LEFT OUTER JOIN ONLY pkrelname pk\n\t *\t ON (pk.pkkeycol1=fk.keycol1 [AND ...])\n\t *\t WHERE pk.pkkeycol1 IS NULL AND\n\t * For MATCH SIMPLE:\n\t *\t (fk.keycol1 IS NOT NULL [AND ...])\n\t * For MATCH FULL:\n\t *\t (fk.keycol1 IS NOT NULL [OR ...])\n\nIt appears the possible explanations for Ben's problem are:\n\n1. For some reason this query is a lot slower than the one he came up\nwith;\n\n2. The code isn't using this query but is falling back to a row-at-a-time\ncheck.\n\nCase 2 would apply if the user attempting to do the ALTER TABLE doesn't\nhave read permission on both tables ... though that seems rather unlikely.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Oct 2013 20:30:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Adding foreign key constraint holds exclusive lock for too\n long (on production database)" }, { "msg_contents": "Ben,\n\n> A couple of questions about the ADD CONSTRAINT. The foreign key column\n> on the local table is indexed, and there are only ~50 unique values,\n> so the db *could* come up with the unique values pretty quickly and\n> then check them.\n\nThis would indeed be a nice optimization, especially now that we have\nindex-only scans; you could do a VACUUM FREEZE on the tables and then\nadd the constraint.\n\n> b) Delete old rows from the table so it's not so big. Feels a bit\n> hacky just to fix this issue.\n> \n> c) Get rid of this foreign key constraint entirely and just check it\n> in code when we insert. Pragmatic solution, but not ideal.\n\nd) add a trigger instead of an actual FK. Slower to execute on\nsubsequent updates/inserts, but doesn't need to be checked.\n\ne) do something (slony, scripts, whatever) so that you're incrementally\nupdating this table instead of recreating it from scratch each time.\n\n> * We're running \"PostgreSQL 9.0.2, compiled by Visual C++ build 1500,\n> 64-bit\" on 64-bit Windows Server 2008 SP1 (6.0.6001)\n\nI will point out that you are missing a whole ton of bug fixes,\nincluding two critical security patches.\n\n> * The \"ratesrequests\" table has two text columns, one of which often\n> contains a few hundred to a couple of KB of data in the field. It is\n> added to rapidly. We regularly VACCUM ANALYZE it.\n> * As expected, the ADD CONSTRAINT has gotten slower over time as this\n> table grew. 
However -- I'm not 100% sure of this, but it seems to have\n> jumped recently (from 3-4 minutes to 7 minutes).\n> * http://www.postgresql.org/message-id/[email protected]\n\nProbably the table just got larger than RAM.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 30 Oct 2013 11:20:30 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding foreign key constraint holds exclusive lock\n for too long (on production database)" }, { "msg_contents": "Thanks, Tom (and David and Josh).\n\n> Well, apparently nobody who knows the code was paying attention, because\n> that hasn't been true for some time. ALTER TABLE ADD FOREIGN KEY will\n> actually validate the constraint using a query constructed like this\n> (cf RI_Initial_Check() in ri_triggers.c):\n\nThis was a very helpful pointer, and interesting to me, because I did\na quick look for the source that handled that but didn't find it (not\nknowing the Postgres codebase at all). It was kinda weird to me at\nfirst that the way it implements this is by building an SQL string and\nthen executing that -- at first I would have thought it'd call the\ninternal functions to do the job. But on second thoughts, this makes\ntotal sense, as that way it gets all the advantages of the query\nplanner/optimizer for this too.\n\n> It appears the possible explanations for Ben's problem are:\n>\n> 1. For some reason this query is a lot slower than the one he came up\n> with;\n>\n> 2. The code isn't using this query but is falling back to a row-at-a-time\n> check.\n\nAnyway, it's definitely #1 that's happening, as I build the\nRI_Initial_Check() query by hand, and it takes just as long as the ADD\nCONSTRAINT.\n\nI'll probably hack around it -- in fact, for now I've just dropped the\ncontraint entirely, as it's not really necessary on this table.\n\nSo I guess this is really a side effect of the quirky way we're\ndumping and restoring only one schema, and dropping/re-adding\nconstraints on deployment because of this. Is this a really strange\nthing to do -- deploying only one schema (the \"static\" data) and\ndropping/re-adding constraints -- or are there better practices here?\n\nRelatedly, what about best practices regarding inter-schema foreign keys?\n\n-Ben\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Oct 2013 21:01:40 +1300", "msg_from": "Ben Hoyt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Adding foreign key constraint holds exclusive lock\n for too long (on production database)" }, { "msg_contents": "Ben Hoyt <[email protected]> writes:\n>> It appears the possible explanations for Ben's problem are:\n>> 1. For some reason this query is a lot slower than the one he came up\n>> with;\n\n> Anyway, it's definitely #1 that's happening, as I build the\n> RI_Initial_Check() query by hand, and it takes just as long as the ADD\n> CONSTRAINT.\n\nHuh. Maybe an optimizer failing? Could we see the full text of both\nqueries and EXPLAIN ANALYZE results for them?\n\n> So I guess this is really a side effect of the quirky way we're\n> dumping and restoring only one schema, and dropping/re-adding\n> constraints on deployment because of this. 
Is this a really strange\n> thing to do -- deploying only one schema (the \"static\" data) and\n> dropping/re-adding constraints -- or are there better practices here?\n\nDoesn't seem unreasonable. One thought is that maybe you need to insert a\nmanual ANALYZE after reloading the data?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 31 Oct 2013 10:19:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Adding foreign key constraint holds exclusive lock for too\n long (on production database)" }, { "msg_contents": "Hmm, weird -- now the RI_Initial_Check() query is much quicker (20s). We do\nANALYZE the data every few nights, so maybe that's what changed it. I'll\nkeep that in mind. -Ben\n\n\nOn Fri, Nov 1, 2013 at 3:19 AM, Tom Lane <[email protected]> wrote:\n\n> Ben Hoyt <[email protected]> writes:\n> >> It appears the possible explanations for Ben's problem are:\n> >> 1. For some reason this query is a lot slower than the one he came up\n> >> with;\n>\n> > Anyway, it's definitely #1 that's happening, as I build the\n> > RI_Initial_Check() query by hand, and it takes just as long as the ADD\n> > CONSTRAINT.\n>\n> Huh. Maybe an optimizer failing? Could we see the full text of both\n> queries and EXPLAIN ANALYZE results for them?\n>\n> > So I guess this is really a side effect of the quirky way we're\n> > dumping and restoring only one schema, and dropping/re-adding\n> > constraints on deployment because of this. Is this a really strange\n> > thing to do -- deploying only one schema (the \"static\" data) and\n> > dropping/re-adding constraints -- or are there better practices here?\n>\n> Doesn't seem unreasonable. One thought is that maybe you need to insert a\n> manual ANALYZE after reloading the data?\n>\n> regards, tom lane\n>\n\nHmm, weird -- now the RI_Initial_Check() query is much quicker (20s). We do ANALYZE the data every few nights, so maybe that's what changed it. I'll keep that in mind. -Ben\nOn Fri, Nov 1, 2013 at 3:19 AM, Tom Lane <[email protected]> wrote:\nBen Hoyt <[email protected]> writes:\n>> It appears the possible explanations for Ben's problem are:\n>> 1. For some reason this query is a lot slower than the one he came up\n>> with;\n\n> Anyway, it's definitely #1 that's happening, as I build the\n> RI_Initial_Check() query by hand, and it takes just as long as the ADD\n> CONSTRAINT.\n\nHuh.  Maybe an optimizer failing?  Could we see the full text of both\nqueries and EXPLAIN ANALYZE results for them?\n\n> So I guess this is really a side effect of the quirky way we're\n> dumping and restoring only one schema, and dropping/re-adding\n> constraints on deployment because of this. Is this a really strange\n> thing to do -- deploying only one schema (the \"static\" data) and\n> dropping/re-adding constraints -- or are there better practices here?\n\nDoesn't seem unreasonable.  One thought is that maybe you need to insert a\nmanual ANALYZE after reloading the data?\n\n                        regards, tom lane", "msg_date": "Fri, 1 Nov 2013 12:32:43 +1300", "msg_from": "Ben Hoyt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Adding foreign key constraint holds exclusive lock\n for too long (on production database)" } ]
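For reference, a sketch of the two approaches the thread discusses, using the table and column names from the original post; the constraint name is made up and the timings will of course depend on the data:

    -- Two-step addition (9.1 and later): ADD ... NOT VALID holds the lock only briefly,
    -- because it skips checking the existing rows.
    ALTER TABLE ratesrequests
        ADD CONSTRAINT ratesrequests_providerid_fkey
        FOREIGN KEY (providerid) REFERENCES provider (providerid)
        NOT VALID;

    -- The deferred validation pass; as noted above it still needs a strong lock on 9.1,
    -- though later major releases reduced the lock VALIDATE CONSTRAINT requires.
    ALTER TABLE ratesrequests
        VALIDATE CONSTRAINT ratesrequests_providerid_fkey;

    -- Hand-built check in the same shape RI_Initial_Check() generates for a
    -- single-column MATCH SIMPLE key, useful for timing what validation should cost:
    SELECT fk.providerid
    FROM ONLY ratesrequests fk
    LEFT OUTER JOIN ONLY provider pk ON pk.providerid = fk.providerid
    WHERE pk.providerid IS NULL
      AND fk.providerid IS NOT NULL;

If the last query is fast but the ALTER TABLE is not, comparing their EXPLAIN ANALYZE output (as requested above) is the natural next step.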
[ { "msg_contents": "I'm not sure if it's supposed to be under general, so please let me know if I\nneed to move it to another topic area.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/postgres-connections-tp5776349p5776383.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 30 Oct 2013 07:16:32 -0700 (PDT)", "msg_from": "si24 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres connections" } ]
[ { "msg_contents": "if we have the following trigger:\n\nCREATE TRIGGER admin_update_trigger BEFORE UPDATE ON admin_logger_overflow\n  FOR EACH ROW\n  WHEN ((old.start_date_time IS DISTINCT FROM new.start_date_time))\n  EXECUTE PROCEDURE update_logger_config();\n\nand the database call issues an:\n\nupdate admin_logger_overflow set stop_date_time = '2013-10-31 15:00:00'::timestamp where admin_update_id = 1;\n\nDoes the trigger fire? No, right?\n\nIf the next database call issues an:\n\nupdate admin_logger_overflow set start_date_time = '2013-10-31 13:59:58'::timestamp where admin_update_id = 1;\n\nDoes the trigger fire? Yes, no doubt.\n\nBut if the very next database call issues an:\n\nupdate admin_logger_overflow set start_date_time = '2013-10-31 13:59:58'::timestamp, stop_date_time = '2013-10-31 16:29:37'::timestamp where admin_update_id = 1;\n\nwhere the start_date_time timestamp value is identical to the one in the prior update statement, is it true that the admin_update_trigger is still involved, because the WHEN IS DISTINCT FROM condition still has to be evaluated and, depending on its result, the determination is made whether the EXECUTE PROCEDURE call happens or not? Yes, right?\n\nWe have processes that perform thousands and thousands of these updates, and these data ingest processes take a measurable performance hit when the trigger is being fired repeatedly, as opposed to when this trigger is removed from the ingest workflow. Does removing the start_date_time column from the update column list, when the value is redundant, circumvent the trigger call and thus reduce the performance hit on these update statements?\n\nthanks\n", "msg_date": "Thu, 31 Oct 2013 15:27:10 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Update Trigger latency utilizing the IS DISTINCT FROM syntax" } ]
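As the poster suspects, with the trigger as written the WHEN expression is evaluated for every updated row regardless of which columns appear in the SET list; only the call into update_logger_config() is skipped when the value is unchanged. Below is a hedged sketch of one way to make the column list matter, combining the trigger above with PostgreSQL's column-specific trigger syntax; the object names come from the question, and whether the changed firing semantics fit the application is an assumption:

    -- Illustrative variant of the poster's trigger: it is only considered at all
    -- when start_date_time is named in the UPDATE's SET list.
    CREATE TRIGGER admin_update_trigger
        BEFORE UPDATE OF start_date_time ON admin_logger_overflow
        FOR EACH ROW
        WHEN (old.start_date_time IS DISTINCT FROM new.start_date_time)
        EXECUTE PROCEDURE update_logger_config();

With UPDATE OF start_date_time, statements that touch only stop_date_time skip the trigger (including its WHEN clause) entirely, while redundant assignments of an unchanged start_date_time are still filtered out by the cheap WHEN test rather than by a call into the trigger function.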
[ { "msg_contents": "Please help with advice!\n\nServer\nHP ProLiant BL460c G1\n\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 8\nOn-line CPU(s) list: 0-7\nThread(s) per core: 1\nCore(s) per socket: 4\nCPU socket(s): 2\nNUMA node(s): 1\nVendor ID: GenuineIntel\nCPU family: 6\nModel: 23\nStepping: 6\nCPU MHz: 3000.105\nBogoMIPS: 6000.04\nVirtualization: VT-x\nL1d cache: 32K\nL1i cache: 32K\nL2 cache: 6144K\nNUMA node0 CPU(s): 0-7\n\n32GB RAM\n[root@db3 ~]# numactl --hardware\navailable: 1 nodes (0)\nnode 0 cpus: 0 1 2 3 4 5 6 7\nnode 0 size: 32765 MB\nnode 0 free: 317 MB\nnode distances:\nnode 0\n 0: 10\n\n\nRAID1 2x146GB 10k rpm\n\nCentOS release 6.3 (Final)\nLinux 2.6.32-279.11.1.el6.x86_64 #1 SMP x86_64 GNU/Linux\n\n\nkernel.msgmnb = 65536\nkernel.msgmax = 65536\nkernel.shmmax = 68719476736\nkernel.shmall = 4294967296\nvm.swappiness = 30\nvm.dirty_background_bytes = 67108864\nvm.dirty_bytes = 536870912\n\n\nPostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6\n20120305 (Red Hat 4.4.6-4), 64-bit\n\nlisten_addresses = '*'\nport = 5433\nmax_connections = 350\nshared_buffers = 8GB\ntemp_buffers = 64MB\nmax_prepared_transactions = 350\nwork_mem = 256MB\nmaintenance_work_mem = 1GB\nmax_stack_depth = 4MB\nmax_files_per_process = 5000\neffective_io_concurrency = 2\nwal_level = hot_standby\nsynchronous_commit = off\ncheckpoint_segments = 64\ncheckpoint_timeout = 15min\ncheckpoint_completion_target = 0.75\nmax_wal_senders = 4\nwal_sender_delay = 100ms\nwal_keep_segments = 128\nrandom_page_cost = 3.0\neffective_cache_size = 18GB\nautovacuum = on\nautovacuum_max_workers = 5\nautovacuum_vacuum_threshold = 900\nautovacuum_analyze_threshold = 350\nautovacuum_vacuum_scale_factor = 0.1\nautovacuum_analyze_scale_factor = 0.05\nlog_min_duration_statement = 500\ndeadlock_timeout = 1s\n\n\nDB size is about 20GB. There is no high write activity on DB. But\nperiodically in postgresql log i see for example: \"select 1\" duration is\nabout 500-1000 ms.\n\nIn this period of time response time from db terribly. This period of time not\nbound with high traffic. It is not other app on the server. There is not\nspecific cron job on server.\n\nOur app written on java and use jdbc to connect to DB and internal pooling.\nThere is about 100 connection to DB. 
This is sar output:\n\n12:00:01 AM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s\npgscand/s pgsteal/s %vmeff\n09:30:01 PM 73.17 302.72 134790.16 0.00 46809.73\n0.00 0.00 0.00 0.00\n09:35:01 PM 63.42 655.80 131740.74 0.00 46182.74\n0.00 0.00 0.00 0.00\n09:40:01 PM 76.87 400.62 122375.34 0.00 42096.27\n0.00 0.00 0.00 0.00\n09:45:01 PM 58.49 198.33 121922.86 0.00 42765.27\n0.00 0.00 0.00 0.00\n09:50:01 PM 52.21 485.45 136775.65 0.15 49098.65\n0.00 0.00 0.00 0.00\n09:55:01 PM 49.68 476.75 130159.24 0.00 45192.54\n0.00 0.00 0.00 0.00\n10:00:01 PM 41.35 295.34 118655.80 0.00 40786.52\n0.00 0.00 0.00 0.00\n10:05:01 PM 60.84 593.85 129890.83 0.00 44170.92\n0.00 0.00 0.00 0.00\n10:10:01 PM 52.08 471.36 132773.63 0.00 46019.13\n0.00 2.41 2.41 100.00\n10:15:01 PM 73.93 196.50 129384.21 0.33 45255.76\n65.92 1.19 66.87 99.64\n10:20:02 PM 70.35 473.16 121940.38 0.11 44061.52 81.95\n37.79 119.42 99.73\n10:25:01 PM 57.84 471.69 130583.33 0.01 46093.33\n0.00 0.00 0.00 0.00\n10:30:01 PM 52.91 321.62 119264.34 0.01 41748.19\n0.00 0.00 0.00 0.00\n10:35:01 PM 47.13 451.78 114625.62 0.02 40600.98\n0.00 0.00 0.00 0.00\n10:40:01 PM 48.96 472.41 102352.79 0.00 35402.17\n0.00 0.00 0.00 0.00\n10:45:01 PM 70.07 321.33 121423.02 0.00 43052.04\n0.00 0.00 0.00 0.00\n10:50:01 PM 46.78 479.95 128938.09 0.02 37864.07 116.64\n48.97 165.07 99.67\n10:55:02 PM 104.84 453.55 109189.98 0.00 37583.50\n0.00 0.00 0.00 0.00\n11:00:01 PM 46.23 248.75 107313.26 0.00 37278.10\n0.00 0.00 0.00 0.00\n11:05:01 PM 44.28 446.41 115598.61 0.01 40070.61\n0.00 0.00 0.00 0.00\n11:10:01 PM 38.86 457.32 100240.71 0.00 34407.29\n0.00 0.00 0.00 0.00\n11:15:01 PM 48.23 275.60 104780.84 0.00 36183.84\n0.00 0.00 0.00 0.00\n11:20:01 PM 92.74 432.49 114698.74 0.01 40413.14\n0.00 0.00 0.00 0.00\n11:25:01 PM 42.76 428.50 87769.28 0.00 29379.87\n0.00 0.00 0.00 0.00\n11:30:01 PM 36.83 260.34 85072.46 0.00 28234.50\n0.00 0.00 0.00 0.00\n11:35:01 PM 62.52 481.56 93150.67 0.00 31137.13\n0.00 0.00 0.00 0.00\n11:40:01 PM 43.50 459.10 90407.34 0.00 30241.70\n0.00 0.00 0.00 0.00\n\n12:00:01 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit\n%commit\n09:30:01 PM 531792 32345400 98.38 475504 29583340 10211064\n27.62\n09:35:01 PM 512096 32365096 98.44 475896 29608660 10200916\n27.59\n09:40:01 PM 455584 32421608 98.61 476276 29638952 10211652\n27.62\n09:45:01 PM 425744 32451448 98.71 476604 29662384 10206044\n27.60\n09:50:01 PM 380960 32496232 98.84 477004 29684296 10243704\n27.71\n09:55:01 PM 385644 32491548 98.83 477312 29706940 10204776\n27.60\n10:00:01 PM 348604 32528588 98.94 477672 29725476 10228984\n27.67\n10:05:01 PM 279216 32597976 99.15 478104 29751016 10281748\n27.81\n10:10:01 PM 255168 32622024 99.22 478220 29769924 10247404\n27.72\n10:15:01 PM 321188 32556004 99.02 475124 29721912 10234500\n27.68\n10:20:02 PM 441660 32435532 98.66 472336 29610476 10246288\n27.71\n10:25:01 PM 440636 32436556 98.66 472636 29634960 10219940\n27.64\n10:30:01 PM 469872 32407320 98.57 473016 29651476 10208520\n27.61\n10:35:01 PM 414540 32462652 98.74 473424 29672728 10223964\n27.65\n10:40:01 PM 354632 32522560 98.92 473772 29693016 10247752\n27.72\n10:45:01 PM 333708 32543484 98.98 474092 29720256 10227204\n27.66\n10:50:01 PM 528004 32349188 98.39 469396 29549832 10219536\n27.64\n10:55:02 PM 499068 32378124 98.48 469692 29587140 10204836\n27.60\n11:00:01 PM 462980 32414212 98.59 470032 29606764 10235820\n27.68\n11:05:01 PM 449540 32427652 98.63 470368 29626136 10209788\n27.61\n11:10:01 PM 419984 32457208 98.72 470772 29644248 10214480\n27.63\n11:15:01 PM 
429900 32447292 98.69 471104 29664292 10202344\n27.59\n11:20:01 PM 394852 32482340 98.80 471528 29698052 10207604\n27.61\n11:25:01 PM 345328 32531864 98.95 471904 29717264 10215632\n27.63\n11:30:01 PM 368224 32508968 98.88 472236 29733544 10206468\n27.61\n11:35:01 PM 321800 32555392 99.02 472528 29758548 10211820\n27.62\n11:40:01 PM 282520 32594672 99.14 472860 29776952 10243516\n27.71\n\n12:00:01 AM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\nawait svctm %util\n09:30:01 PM dev253-5 66.29 146.33 483.33 9.50 6.27\n94.53 2.08 13.78\n09:35:01 PM dev253-5 154.80 126.85 1192.96 8.53 28.57\n184.59 1.45 22.43\n09:40:01 PM dev253-5 92.21 153.75 686.75 9.11 11.53\n125.00 1.87 17.21\n09:45:01 PM dev253-5 39.66 116.99 279.32 9.99 0.42\n10.66 2.61 10.36\n09:50:01 PM dev253-5 106.73 95.58 820.70 8.58 16.77\n157.12 1.68 17.88\n09:55:01 PM dev253-5 107.90 99.36 831.46 8.63 16.05\n148.76 1.71 18.42\n10:00:01 PM dev253-5 62.48 82.70 471.28 8.87 5.91\n94.52 2.10 13.11\n10:05:01 PM dev253-5 137.84 121.69 1064.03 8.60 24.48\n177.31 1.56 21.52\n10:10:01 PM dev253-5 107.93 104.16 827.83 8.64 16.69\n155.04 1.68 18.11\n10:15:01 PM dev253-5 40.55 126.12 277.57 9.96 0.41\n10.13 2.57 10.42\n10:20:02 PM dev253-5 104.33 136.77 793.49 8.92 16.97\n162.69 1.76 18.35\n10:25:01 PM dev253-5 108.04 115.36 825.26 8.71 16.68\n154.36 1.76 19.05\n10:30:01 PM dev253-5 69.72 105.66 523.05 9.02 7.45\n106.92 1.90 13.25\n10:35:01 PM dev253-5 101.58 91.59 781.85 8.60 15.00\n147.68 1.67 16.97\n10:40:01 PM dev253-5 107.50 97.91 827.17 8.61 17.68\n164.49 1.77 19.06\n10:45:01 PM dev253-5 69.98 140.13 519.57 9.43 7.09\n101.25 1.96 13.72\n10:50:01 PM dev253-5 104.30 83.31 806.12 8.53 16.18\n155.10 1.65 17.16\n10:55:02 PM dev253-5 106.86 209.65 790.27 9.36 15.59\n145.08 1.74 18.60\n11:00:01 PM dev253-5 50.42 92.08 371.52 9.19 3.05\n62.16 2.28 11.52\n11:05:01 PM dev253-5 101.06 88.31 776.57 8.56 15.12\n149.58 1.67 16.90\n11:10:01 PM dev253-5 103.08 77.73 798.23 8.50 17.14\n166.25 1.74 17.90\n11:15:01 PM dev253-5 57.74 96.45 428.62 9.09 5.23\n90.52 2.13 12.32\n11:20:01 PM dev253-5 97.73 185.18 727.38 9.34 14.64\n149.84 1.94 18.92\n11:25:01 PM dev253-5 95.03 85.52 730.31 8.58 14.42\n151.79 1.79 16.97\n11:30:01 PM dev253-5 53.76 73.65 404.47 8.89 3.94\n73.25 2.17 11.64\n11:35:01 PM dev253-5 110.37 125.05 842.17 8.76 16.96\n153.63 1.66 18.30\n11:40:01 PM dev253-5 103.93 87.00 801.59 8.55 16.01\n154.00 1.73 18.00\n\nAs you can see there is no high io activity in this period of time but db\nis frozen. My opinion that i have incorrect kernel setting and/or i have a\nmistake in postgresql.conf. Because there is not high activity on db. load\navg is about 1. When there is high traffic is about 1.15. 
This is from\nnagios monitoring system.\n\nBut sometimes load is about 4 and this time matches with sar %vmeff = 100%\nand database response time increase.\n\n-- \nС уважением Селявка Евгений\n\nPlease help with advice!Server HP ProLiant BL460c G1Architecture:          x86_64CPU op-mode(s):        32-bit, 64-bitByte Order:            Little EndianCPU(s):                8\r\nOn-line CPU(s) list:   0-7Thread(s) per core:    1Core(s) per socket:    4CPU socket(s):         2NUMA node(s):          1Vendor ID:             GenuineIntelCPU family:            6Model:                 23\r\nStepping:              6CPU MHz:               3000.105BogoMIPS:              6000.04Virtualization:        VT-xL1d cache:             32KL1i cache:             32KL2 cache:              6144KNUMA node0 CPU(s):     0-7\n32GB RAM[root@db3 ~]# numactl --hardwareavailable: 1 nodes (0)node 0 cpus: 0 1 2 3 4 5 6 7node 0 size: 32765 MBnode 0 free: 317 MBnode distances:node   0  0:  10\r\nRAID1 2x146GB 10k rpmCentOS release 6.3 (Final)Linux 2.6.32-279.11.1.el6.x86_64 #1 SMP x86_64 GNU/Linux kernel.msgmnb = 65536kernel.msgmax = 65536kernel.shmmax = 68719476736\r\nkernel.shmall = 4294967296vm.swappiness = 30vm.dirty_background_bytes = 67108864vm.dirty_bytes = 536870912PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit\nlisten_addresses = '*'port = 5433max_connections = 350shared_buffers = 8GBtemp_buffers = 64MBmax_prepared_transactions = 350work_mem = 256MBmaintenance_work_mem = 1GBmax_stack_depth = 4MB\r\nmax_files_per_process = 5000effective_io_concurrency = 2wal_level = hot_standbysynchronous_commit = offcheckpoint_segments = 64checkpoint_timeout = 15mincheckpoint_completion_target = 0.75max_wal_senders = 4\r\nwal_sender_delay = 100mswal_keep_segments = 128random_page_cost = 3.0effective_cache_size = 18GBautovacuum = onautovacuum_max_workers = 5autovacuum_vacuum_threshold = 900autovacuum_analyze_threshold = 350\r\nautovacuum_vacuum_scale_factor = 0.1autovacuum_analyze_scale_factor = 0.05log_min_duration_statement = 500deadlock_timeout = 1sDB size is about 20GB. There is no high write activity on DB. But periodically in postgresql log i see for example: \"select 1\" duration is about 500-1000 ms. \nIn this period of time response time from db terribly. This period of time not bound with high traffic. It is not other app on the server. There is not specific cron job on server. \nOur app written on java and use jdbc to connect to DB and internal pooling. There is about 100 connection to DB. 
This is sar output:\n12:00:01 AM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff09:30:01 PM     73.17    302.72 134790.16      0.00  46809.73      0.00      0.00      0.00      0.0009:35:01 PM     63.42    655.80 131740.74      0.00  46182.74      0.00      0.00      0.00      0.00\r\n09:40:01 PM     76.87    400.62 122375.34      0.00  42096.27      0.00      0.00      0.00      0.0009:45:01 PM     58.49    198.33 121922.86      0.00  42765.27      0.00      0.00      0.00      0.0009:50:01 PM     52.21    485.45 136775.65      0.15  49098.65      0.00      0.00      0.00      0.00\r\n09:55:01 PM     49.68    476.75 130159.24      0.00  45192.54      0.00      0.00      0.00      0.0010:00:01 PM     41.35    295.34 118655.80      0.00  40786.52      0.00      0.00      0.00      0.0010:05:01 PM     60.84    593.85 129890.83      0.00  44170.92      0.00      0.00      0.00      0.00\r\n10:10:01 PM     52.08    471.36 132773.63      0.00  46019.13      0.00      2.41      2.41    100.0010:15:01 PM     73.93    196.50 129384.21      0.33  45255.76     65.92      1.19     66.87     99.6410:20:02 PM     70.35    473.16 121940.38      0.11  44061.52     81.95     37.79    119.42     99.73\r\n10:25:01 PM     57.84    471.69 130583.33      0.01  46093.33      0.00      0.00      0.00      0.0010:30:01 PM     52.91    321.62 119264.34      0.01  41748.19      0.00      0.00      0.00      0.0010:35:01 PM     47.13    451.78 114625.62      0.02  40600.98      0.00      0.00      0.00      0.00\r\n10:40:01 PM     48.96    472.41 102352.79      0.00  35402.17      0.00      0.00      0.00      0.0010:45:01 PM     70.07    321.33 121423.02      0.00  43052.04      0.00      0.00      0.00      0.0010:50:01 PM     46.78    479.95 128938.09      0.02  37864.07    116.64     48.97    165.07     99.67\r\n10:55:02 PM    104.84    453.55 109189.98      0.00  37583.50      0.00      0.00      0.00      0.0011:00:01 PM     46.23    248.75 107313.26      0.00  37278.10      0.00      0.00      0.00      0.0011:05:01 PM     44.28    446.41 115598.61      0.01  40070.61      0.00      0.00      0.00      0.00\r\n11:10:01 PM     38.86    457.32 100240.71      0.00  34407.29      0.00      0.00      0.00      0.0011:15:01 PM     48.23    275.60 104780.84      0.00  36183.84      0.00      0.00      0.00      0.0011:20:01 PM     92.74    432.49 114698.74      0.01  40413.14      0.00      0.00      0.00      0.00\r\n11:25:01 PM     42.76    428.50  87769.28      0.00  29379.87      0.00      0.00      0.00      0.0011:30:01 PM     36.83    260.34  85072.46      0.00  28234.50      0.00      0.00      0.00      0.0011:35:01 PM     62.52    481.56  93150.67      0.00  31137.13      0.00      0.00      0.00      0.00\r\n11:40:01 PM     43.50    459.10  90407.34      0.00  30241.70      0.00      0.00      0.00      0.0012:00:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit\r\n09:30:01 PM    531792  32345400     98.38    475504  29583340  10211064     27.6209:35:01 PM    512096  32365096     98.44    475896  29608660  10200916     27.5909:40:01 PM    455584  32421608     98.61    476276  29638952  10211652     27.62\r\n09:45:01 PM    425744  32451448     98.71    476604  29662384  10206044     27.6009:50:01 PM    380960  32496232     98.84    477004  29684296  10243704     27.7109:55:01 PM    385644  32491548     98.83    477312  29706940  10204776     27.60\r\n10:00:01 PM    348604  32528588     98.94    477672  29725476  10228984     27.6710:05:01 
PM    279216  32597976     99.15    478104  29751016  10281748     27.8110:10:01 PM    255168  32622024     99.22    478220  29769924  10247404     27.72\r\n10:15:01 PM    321188  32556004     99.02    475124  29721912  10234500     27.6810:20:02 PM    441660  32435532     98.66    472336  29610476  10246288     27.7110:25:01 PM    440636  32436556     98.66    472636  29634960  10219940     27.64\r\n10:30:01 PM    469872  32407320     98.57    473016  29651476  10208520     27.6110:35:01 PM    414540  32462652     98.74    473424  29672728  10223964     27.6510:40:01 PM    354632  32522560     98.92    473772  29693016  10247752     27.72\r\n10:45:01 PM    333708  32543484     98.98    474092  29720256  10227204     27.6610:50:01 PM    528004  32349188     98.39    469396  29549832  10219536     27.6410:55:02 PM    499068  32378124     98.48    469692  29587140  10204836     27.60\r\n11:00:01 PM    462980  32414212     98.59    470032  29606764  10235820     27.6811:05:01 PM    449540  32427652     98.63    470368  29626136  10209788     27.6111:10:01 PM    419984  32457208     98.72    470772  29644248  10214480     27.63\r\n11:15:01 PM    429900  32447292     98.69    471104  29664292  10202344     27.5911:20:01 PM    394852  32482340     98.80    471528  29698052  10207604     27.6111:25:01 PM    345328  32531864     98.95    471904  29717264  10215632     27.63\r\n11:30:01 PM    368224  32508968     98.88    472236  29733544  10206468     27.6111:35:01 PM    321800  32555392     99.02    472528  29758548  10211820     27.6211:40:01 PM    282520  32594672     99.14    472860  29776952  10243516     27.71\n12:00:01 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util09:30:01 PM  dev253-5     66.29    146.33    483.33      9.50      6.27     94.53      2.08     13.7809:35:01 PM  dev253-5    154.80    126.85   1192.96      8.53     28.57    184.59      1.45     22.43\r\n09:40:01 PM  dev253-5     92.21    153.75    686.75      9.11     11.53    125.00      1.87     17.2109:45:01 PM  dev253-5     39.66    116.99    279.32      9.99      0.42     10.66      2.61     10.3609:50:01 PM  dev253-5    106.73     95.58    820.70      8.58     16.77    157.12      1.68     17.88\r\n09:55:01 PM  dev253-5    107.90     99.36    831.46      8.63     16.05    148.76      1.71     18.4210:00:01 PM  dev253-5     62.48     82.70    471.28      8.87      5.91     94.52      2.10     13.1110:05:01 PM  dev253-5    137.84    121.69   1064.03      8.60     24.48    177.31      1.56     21.52\r\n10:10:01 PM  dev253-5    107.93    104.16    827.83      8.64     16.69    155.04      1.68     18.1110:15:01 PM  dev253-5     40.55    126.12    277.57      9.96      0.41     10.13      2.57     10.4210:20:02 PM  dev253-5    104.33    136.77    793.49      8.92     16.97    162.69      1.76     18.35\r\n10:25:01 PM  dev253-5    108.04    115.36    825.26      8.71     16.68    154.36      1.76     19.0510:30:01 PM  dev253-5     69.72    105.66    523.05      9.02      7.45    106.92      1.90     13.2510:35:01 PM  dev253-5    101.58     91.59    781.85      8.60     15.00    147.68      1.67     16.97\r\n10:40:01 PM  dev253-5    107.50     97.91    827.17      8.61     17.68    164.49      1.77     19.0610:45:01 PM  dev253-5     69.98    140.13    519.57      9.43      7.09    101.25      1.96     13.7210:50:01 PM  dev253-5    104.30     83.31    806.12      8.53     16.18    155.10      1.65     17.16\r\n10:55:02 PM  dev253-5    106.86    209.65    790.27      9.36     15.59    
145.08      1.74     18.6011:00:01 PM  dev253-5     50.42     92.08    371.52      9.19      3.05     62.16      2.28     11.5211:05:01 PM  dev253-5    101.06     88.31    776.57      8.56     15.12    149.58      1.67     16.90\r\n11:10:01 PM  dev253-5    103.08     77.73    798.23      8.50     17.14    166.25      1.74     17.9011:15:01 PM  dev253-5     57.74     96.45    428.62      9.09      5.23     90.52      2.13     12.3211:20:01 PM  dev253-5     97.73    185.18    727.38      9.34     14.64    149.84      1.94     18.92\r\n11:25:01 PM  dev253-5     95.03     85.52    730.31      8.58     14.42    151.79      1.79     16.9711:30:01 PM  dev253-5     53.76     73.65    404.47      8.89      3.94     73.25      2.17     11.6411:35:01 PM  dev253-5    110.37    125.05    842.17      8.76     16.96    153.63      1.66     18.30\r\n11:40:01 PM  dev253-5    103.93     87.00    801.59      8.55     16.01    154.00      1.73     18.00As you can see there is no high io activity in this period of time but db is frozen. My opinion that i have incorrect kernel setting and/or i have a mistake in postgresql.conf. Because there is not high activity on db. load avg is about 1. When there is high traffic is about 1.15. This is from nagios monitoring system. \nBut sometimes load is about 4 and this time matches with sar %vmeff = 100% and database response time increase. \n-- С уважением Селявка Евгений", "msg_date": "Sat, 2 Nov 2013 22:54:02 +0400", "msg_from": "=?KOI8-R?B?5dfHxc7JyiDzxczR18vB?= <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql recommendation memory" }, { "msg_contents": "Hello desmodemone, i look again and again through my sar statistics and i\ndon't think that my db swapping in freeze time. For example:\n\nsar -B\n12:00:02 AM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s\npgscand/s pgsteal/s %vmeff\n09:40:01 PM 66.13 352.43 195070.33 0.00 70627.21\n0.00 0.00 0.00 0.00\n09:45:01 PM 54.55 526.87 190893.02 0.00 67850.63\n25.76 2.40 28.16 100.00\n09:50:01 PM 74.97 509.70 200564.75 0.00 71807.80\n0.00 0.00 0.00 0.00\n09:55:01 PM 81.54 443.57 186335.43 0.05 68486.83 127.33\n160.26 287.35 99.92\n10:00:01 PM 62.03 528.46 169701.41 0.00 60487.37 0.00\n15.62 15.62 100.00\n10:05:01 PM 64.61 504.76 178725.60 0.00 66251.26 0.00\n15.80 15.80 100.00\n10:10:01 PM 80.06 336.47 172819.14 0.00 62379.45\n0.00 0.00 0.00 0.00\n10:15:01 PM 59.69 512.85 180228.56 0.00 64091.90\n0.00 0.00 0.00 0.00\n\nsar -S\n12:00:02 AM kbswpfree kbswpused %swpused kbswpcad %swpcad\n09:40:01 PM 4095420 572 0.01 252 44.06\n09:45:01 PM 4095420 572 0.01 252 44.06\n09:50:01 PM 4095420 572 0.01 252 44.06\n09:55:01 PM 4095420 572 0.01 252 44.06\n10:00:01 PM 4095420 572 0.01 252 44.06\n10:05:01 PM 4095420 572 0.01 252 44.06\n10:10:01 PM 4095420 572 0.01 252 44.06\n10:15:01 PM 4095420 572 0.01 252 44.06\n\nIn thist time as you can see swap usage didn't change at all. And there is\ndedicated server for postgresql, there are no more app on this server, except\npacemaker+corosync for HA cluster. May be i read my sar statistics\nincorrect?\n\nI set work_mem to 1/4 from available RAM. I have 32Gb RAM so i set\nshared_buffers to 8Gb.\n\nNow i also set\n\nvm.dirty_bytes=67108864 this value equal my Smart Array E200i Cache Size.\nvm.dirty_background_bytes = 16777216 - 1/4 from vm.dirty_bytes\n\nNext step try to set correct values for:\nbgwriter_delay\nbgwriter_lru_maxpages\nbgwriter_lru_multiplier\n\n\nAnd one more fact, if i cleanup fs cache. sync && echo 1 >\n/proc/sys/vm/drop_caches. 
OS releases about 10-12Gb memory and freeze time\ncomes when fs cache comes again full. For example one or two day there is\nno freeze on DB.\n\n\n2013/11/4 desmodemone <[email protected]>\n\n> Hello,\n> I see your request on performance mailing list. I think your\n> server is swapping and because yoru swap is in the same RAID disk with all\n> (/ , database satastore etc ) you ncounter a freeze of system.\n>\n> I think you have to analyze why you are swapping. Are ther eonly\n> postgresql inside ? is it possible you are using too much work_mem memory ?\n>\n>\n> Have a nice day\n>\n>\n> 2013/11/2 Евгений Селявка <[email protected]>\n>\n>> Please help with advice!\n>>\n>> Server\n>> HP ProLiant BL460c G1\n>>\n>> Architecture: x86_64\n>> CPU op-mode(s): 32-bit, 64-bit\n>> Byte Order: Little Endian\n>> CPU(s): 8\n>> On-line CPU(s) list: 0-7\n>> Thread(s) per core: 1\n>> Core(s) per socket: 4\n>> CPU socket(s): 2\n>> NUMA node(s): 1\n>> Vendor ID: GenuineIntel\n>> CPU family: 6\n>> Model: 23\n>> Stepping: 6\n>> CPU MHz: 3000.105\n>> BogoMIPS: 6000.04\n>> Virtualization: VT-x\n>> L1d cache: 32K\n>> L1i cache: 32K\n>> L2 cache: 6144K\n>> NUMA node0 CPU(s): 0-7\n>>\n>> 32GB RAM\n>> [root@db3 ~]# numactl --hardware\n>> available: 1 nodes (0)\n>> node 0 cpus: 0 1 2 3 4 5 6 7\n>> node 0 size: 32765 MB\n>> node 0 free: 317 MB\n>> node distances:\n>> node 0\n>> 0: 10\n>>\n>>\n>> RAID1 2x146GB 10k rpm\n>>\n>> CentOS release 6.3 (Final)\n>> Linux 2.6.32-279.11.1.el6.x86_64 #1 SMP x86_64 GNU/Linux\n>>\n>>\n>> kernel.msgmnb = 65536\n>> kernel.msgmax = 65536\n>> kernel.shmmax = 68719476736\n>> kernel.shmall = 4294967296\n>> vm.swappiness = 30\n>> vm.dirty_background_bytes = 67108864\n>> vm.dirty_bytes = 536870912\n>>\n>>\n>> PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6\n>> 20120305 (Red Hat 4.4.6-4), 64-bit\n>>\n>> listen_addresses = '*'\n>> port = 5433\n>> max_connections = 350\n>> shared_buffers = 8GB\n>> temp_buffers = 64MB\n>> max_prepared_transactions = 350\n>> work_mem = 256MB\n>> maintenance_work_mem = 1GB\n>> max_stack_depth = 4MB\n>> max_files_per_process = 5000\n>> effective_io_concurrency = 2\n>> wal_level = hot_standby\n>> synchronous_commit = off\n>> checkpoint_segments = 64\n>> checkpoint_timeout = 15min\n>> checkpoint_completion_target = 0.75\n>> max_wal_senders = 4\n>> wal_sender_delay = 100ms\n>> wal_keep_segments = 128\n>> random_page_cost = 3.0\n>> effective_cache_size = 18GB\n>> autovacuum = on\n>> autovacuum_max_workers = 5\n>> autovacuum_vacuum_threshold = 900\n>> autovacuum_analyze_threshold = 350\n>> autovacuum_vacuum_scale_factor = 0.1\n>> autovacuum_analyze_scale_factor = 0.05\n>> log_min_duration_statement = 500\n>> deadlock_timeout = 1s\n>>\n>>\n>> DB size is about 20GB. There is no high write activity on DB. But\n>> periodically in postgresql log i see for example: \"select 1\" duration is\n>> about 500-1000 ms.\n>>\n>> In this period of time response time from db terribly. This period of\n>> time not bound with high traffic. It is not other app on the server.\n>> There is not specific cron job on server.\n>>\n>> Our app written on java and use jdbc to connect to DB and internal\n>> pooling. There is about 100 connection to DB. 
This is sar output:\n>>\n>> 12:00:01 AM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s\n>> pgscand/s pgsteal/s %vmeff\n>> 09:30:01 PM 73.17 302.72 134790.16 0.00 46809.73\n>> 0.00 0.00 0.00 0.00\n>> 09:35:01 PM 63.42 655.80 131740.74 0.00 46182.74\n>> 0.00 0.00 0.00 0.00\n>> 09:40:01 PM 76.87 400.62 122375.34 0.00 42096.27\n>> 0.00 0.00 0.00 0.00\n>> 09:45:01 PM 58.49 198.33 121922.86 0.00 42765.27\n>> 0.00 0.00 0.00 0.00\n>> 09:50:01 PM 52.21 485.45 136775.65 0.15 49098.65\n>> 0.00 0.00 0.00 0.00\n>> 09:55:01 PM 49.68 476.75 130159.24 0.00 45192.54\n>> 0.00 0.00 0.00 0.00\n>> 10:00:01 PM 41.35 295.34 118655.80 0.00 40786.52\n>> 0.00 0.00 0.00 0.00\n>> 10:05:01 PM 60.84 593.85 129890.83 0.00 44170.92\n>> 0.00 0.00 0.00 0.00\n>> 10:10:01 PM 52.08 471.36 132773.63 0.00 46019.13\n>> 0.00 2.41 2.41 100.00\n>> 10:15:01 PM 73.93 196.50 129384.21 0.33 45255.76\n>> 65.92 1.19 66.87 99.64\n>> 10:20:02 PM 70.35 473.16 121940.38 0.11 44061.52\n>> 81.95 37.79 119.42 99.73\n>> 10:25:01 PM 57.84 471.69 130583.33 0.01 46093.33\n>> 0.00 0.00 0.00 0.00\n>> 10:30:01 PM 52.91 321.62 119264.34 0.01 41748.19\n>> 0.00 0.00 0.00 0.00\n>> 10:35:01 PM 47.13 451.78 114625.62 0.02 40600.98\n>> 0.00 0.00 0.00 0.00\n>> 10:40:01 PM 48.96 472.41 102352.79 0.00 35402.17\n>> 0.00 0.00 0.00 0.00\n>> 10:45:01 PM 70.07 321.33 121423.02 0.00 43052.04\n>> 0.00 0.00 0.00 0.00\n>> 10:50:01 PM 46.78 479.95 128938.09 0.02 37864.07\n>> 116.64 48.97 165.07 99.67\n>> 10:55:02 PM 104.84 453.55 109189.98 0.00 37583.50\n>> 0.00 0.00 0.00 0.00\n>> 11:00:01 PM 46.23 248.75 107313.26 0.00 37278.10\n>> 0.00 0.00 0.00 0.00\n>> 11:05:01 PM 44.28 446.41 115598.61 0.01 40070.61\n>> 0.00 0.00 0.00 0.00\n>> 11:10:01 PM 38.86 457.32 100240.71 0.00 34407.29\n>> 0.00 0.00 0.00 0.00\n>> 11:15:01 PM 48.23 275.60 104780.84 0.00 36183.84\n>> 0.00 0.00 0.00 0.00\n>> 11:20:01 PM 92.74 432.49 114698.74 0.01 40413.14\n>> 0.00 0.00 0.00 0.00\n>> 11:25:01 PM 42.76 428.50 87769.28 0.00 29379.87\n>> 0.00 0.00 0.00 0.00\n>> 11:30:01 PM 36.83 260.34 85072.46 0.00 28234.50\n>> 0.00 0.00 0.00 0.00\n>> 11:35:01 PM 62.52 481.56 93150.67 0.00 31137.13\n>> 0.00 0.00 0.00 0.00\n>> 11:40:01 PM 43.50 459.10 90407.34 0.00 30241.70\n>> 0.00 0.00 0.00 0.00\n>>\n>> 12:00:01 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit\n>> %commit\n>> 09:30:01 PM 531792 32345400 98.38 475504 29583340\n>> 10211064 27.62\n>> 09:35:01 PM 512096 32365096 98.44 475896 29608660\n>> 10200916 27.59\n>> 09:40:01 PM 455584 32421608 98.61 476276 29638952\n>> 10211652 27.62\n>> 09:45:01 PM 425744 32451448 98.71 476604 29662384\n>> 10206044 27.60\n>> 09:50:01 PM 380960 32496232 98.84 477004 29684296\n>> 10243704 27.71\n>> 09:55:01 PM 385644 32491548 98.83 477312 29706940\n>> 10204776 27.60\n>> 10:00:01 PM 348604 32528588 98.94 477672 29725476\n>> 10228984 27.67\n>> 10:05:01 PM 279216 32597976 99.15 478104 29751016\n>> 10281748 27.81\n>> 10:10:01 PM 255168 32622024 99.22 478220 29769924\n>> 10247404 27.72\n>> 10:15:01 PM 321188 32556004 99.02 475124 29721912\n>> 10234500 27.68\n>> 10:20:02 PM 441660 32435532 98.66 472336 29610476\n>> 10246288 27.71\n>> 10:25:01 PM 440636 32436556 98.66 472636 29634960\n>> 10219940 27.64\n>> 10:30:01 PM 469872 32407320 98.57 473016 29651476\n>> 10208520 27.61\n>> 10:35:01 PM 414540 32462652 98.74 473424 29672728\n>> 10223964 27.65\n>> 10:40:01 PM 354632 32522560 98.92 473772 29693016\n>> 10247752 27.72\n>> 10:45:01 PM 333708 32543484 98.98 474092 29720256\n>> 10227204 27.66\n>> 10:50:01 PM 528004 32349188 98.39 469396 29549832\n>> 10219536 27.64\n>> 
10:55:02 PM 499068 32378124 98.48 469692 29587140\n>> 10204836 27.60\n>> 11:00:01 PM 462980 32414212 98.59 470032 29606764\n>> 10235820 27.68\n>> 11:05:01 PM 449540 32427652 98.63 470368 29626136\n>> 10209788 27.61\n>> 11:10:01 PM 419984 32457208 98.72 470772 29644248\n>> 10214480 27.63\n>> 11:15:01 PM 429900 32447292 98.69 471104 29664292\n>> 10202344 27.59\n>> 11:20:01 PM 394852 32482340 98.80 471528 29698052\n>> 10207604 27.61\n>> 11:25:01 PM 345328 32531864 98.95 471904 29717264\n>> 10215632 27.63\n>> 11:30:01 PM 368224 32508968 98.88 472236 29733544\n>> 10206468 27.61\n>> 11:35:01 PM 321800 32555392 99.02 472528 29758548\n>> 10211820 27.62\n>> 11:40:01 PM 282520 32594672 99.14 472860 29776952\n>> 10243516 27.71\n>>\n>> 12:00:01 AM DEV tps rd_sec/s wr_sec/s avgrq-sz\n>> avgqu-sz await svctm %util\n>> 09:30:01 PM dev253-5 66.29 146.33 483.33 9.50\n>> 6.27 94.53 2.08 13.78\n>> 09:35:01 PM dev253-5 154.80 126.85 1192.96 8.53\n>> 28.57 184.59 1.45 22.43\n>> 09:40:01 PM dev253-5 92.21 153.75 686.75 9.11\n>> 11.53 125.00 1.87 17.21\n>> 09:45:01 PM dev253-5 39.66 116.99 279.32 9.99\n>> 0.42 10.66 2.61 10.36\n>> 09:50:01 PM dev253-5 106.73 95.58 820.70 8.58\n>> 16.77 157.12 1.68 17.88\n>> 09:55:01 PM dev253-5 107.90 99.36 831.46 8.63\n>> 16.05 148.76 1.71 18.42\n>> 10:00:01 PM dev253-5 62.48 82.70 471.28 8.87\n>> 5.91 94.52 2.10 13.11\n>> 10:05:01 PM dev253-5 137.84 121.69 1064.03 8.60\n>> 24.48 177.31 1.56 21.52\n>> 10:10:01 PM dev253-5 107.93 104.16 827.83 8.64 16.69\n>> 155.04 1.68 18.11\n>> 10:15:01 PM dev253-5 40.55 126.12 277.57 9.96\n>> 0.41 10.13 2.57 10.42\n>> 10:20:02 PM dev253-5 104.33 136.77 793.49 8.92\n>> 16.97 162.69 1.76 18.35\n>> 10:25:01 PM dev253-5 108.04 115.36 825.26 8.71 16.68\n>> 154.36 1.76 19.05\n>> 10:30:01 PM dev253-5 69.72 105.66 523.05 9.02\n>> 7.45 106.92 1.90 13.25\n>> 10:35:01 PM dev253-5 101.58 91.59 781.85 8.60\n>> 15.00 147.68 1.67 16.97\n>> 10:40:01 PM dev253-5 107.50 97.91 827.17 8.61\n>> 17.68 164.49 1.77 19.06\n>> 10:45:01 PM dev253-5 69.98 140.13 519.57 9.43\n>> 7.09 101.25 1.96 13.72\n>> 10:50:01 PM dev253-5 104.30 83.31 806.12 8.53\n>> 16.18 155.10 1.65 17.16\n>> 10:55:02 PM dev253-5 106.86 209.65 790.27 9.36\n>> 15.59 145.08 1.74 18.60\n>> 11:00:01 PM dev253-5 50.42 92.08 371.52 9.19\n>> 3.05 62.16 2.28 11.52\n>> 11:05:01 PM dev253-5 101.06 88.31 776.57 8.56\n>> 15.12 149.58 1.67 16.90\n>> 11:10:01 PM dev253-5 103.08 77.73 798.23 8.50\n>> 17.14 166.25 1.74 17.90\n>> 11:15:01 PM dev253-5 57.74 96.45 428.62 9.09\n>> 5.23 90.52 2.13 12.32\n>> 11:20:01 PM dev253-5 97.73 185.18 727.38 9.34\n>> 14.64 149.84 1.94 18.92\n>> 11:25:01 PM dev253-5 95.03 85.52 730.31 8.58 14.42\n>> 151.79 1.79 16.97\n>> 11:30:01 PM dev253-5 53.76 73.65 404.47 8.89\n>> 3.94 73.25 2.17 11.64\n>> 11:35:01 PM dev253-5 110.37 125.05 842.17 8.76\n>> 16.96 153.63 1.66 18.30\n>> 11:40:01 PM dev253-5 103.93 87.00 801.59 8.55\n>> 16.01 154.00 1.73 18.00\n>>\n>> As you can see there is no high io activity in this period of time but db\n>> is frozen. My opinion that i have incorrect kernel setting and/or i have\n>> a mistake in postgresql.conf. Because there is not high activity on db.\n>> load avg is about 1. When there is high traffic is about 1.15. 
This is from\n>> nagios monitoring system.\n>>\n>> But sometimes load is about 4 and this time matches with sar %vmeff =\n>> 100% and database response time increase.\n>>\n>> --\n>> С уважением Селявка Евгений\n>>\n>\n>\n\n\n-- \nС уважением Селявка Евгений\n\nHello desmodemone, i look again and again through my sar statistics and i don't think that my db swapping in freeze time. For example:sar -B12:00:02 AM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff\n09:40:01 PM     66.13    352.43 195070.33      0.00  70627.21      0.00      0.00      0.00      0.0009:45:01 PM     54.55    526.87 190893.02      0.00  67850.63     25.76      2.40     28.16    100.0009:50:01 PM     74.97    509.70 200564.75      0.00  71807.80      0.00      0.00      0.00      0.00\n09:55:01 PM     81.54    443.57 186335.43      0.05  68486.83    127.33    160.26    287.35     99.9210:00:01 PM     62.03    528.46 169701.41      0.00  60487.37      0.00     15.62     15.62    100.0010:05:01 PM     64.61    504.76 178725.60      0.00  66251.26      0.00     15.80     15.80    100.00\n10:10:01 PM     80.06    336.47 172819.14      0.00  62379.45      0.00      0.00      0.00      0.0010:15:01 PM     59.69    512.85 180228.56      0.00  64091.90      0.00      0.00      0.00      0.00sar -S \n12:00:02 AM kbswpfree kbswpused  %swpused  kbswpcad   %swpcad09:40:01 PM   4095420       572      0.01       252     44.0609:45:01 PM   4095420       572      0.01       252     44.0609:50:01 PM   4095420       572      0.01       252     44.06\n09:55:01 PM   4095420       572      0.01       252     44.0610:00:01 PM   4095420       572      0.01       252     44.0610:05:01 PM   4095420       572      0.01       252     44.0610:10:01 PM   4095420       572      0.01       252     44.06\n10:15:01 PM   4095420       572      0.01       252     44.06In thist time as you can see swap usage didn't change at all. And there is dedicated server for postgresql, there are no more app on this server, except pacemaker+corosync for HA cluster. May be i read my sar statistics incorrect?\nI set work_mem to 1/4 from available RAM. I have 32Gb RAM so i set shared_buffers to 8Gb. Now i also set \nvm.dirty_bytes=67108864 this value equal my Smart Array E200i Cache Size. vm.dirty_background_bytes = 16777216 - 1/4 from vm.dirty_bytes\nNext step try to set correct values for:bgwriter_delaybgwriter_lru_maxpagesbgwriter_lru_multiplierAnd one more fact, if i cleanup fs cache. sync && echo 1 > /proc/sys/vm/drop_caches. OS releases about 10-12Gb memory and freeze time comes when fs cache comes again full. For example  one or two day there is no freeze on DB.\n2013/11/4 desmodemone <[email protected]>\nHello,             I see your request on performance mailing list. I think your server is swapping  and because yoru swap is in the same RAID disk with all (/ , database satastore etc ) you ncounter a freeze of system.\nI think you have to analyze why you are swapping. Are ther eonly postgresql inside ? 
is it possible you are using too much work_mem memory ? Have a nice day

2013/11/2 Евгений Селявка <[email protected]>

Please help with advice!

Server HP ProLiant BL460c G1

SNIP

32GB RAM

SNIP

RAID1 2x146GB 10k rpm

CentOS release 6.3 (Final)
Linux 2.6.32-279.11.1.el6.x86_64 #1 SMP x86_64 GNU/Linux

kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
vm.swappiness = 30
vm.dirty_background_bytes = 67108864
vm.dirty_bytes = 536870912

PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit

listen_addresses = '*'
port = 5433
max_connections = 350
shared_buffers = 8GB
temp_buffers = 64MB
max_prepared_transactions = 350
work_mem = 256MB
maintenance_work_mem = 1GB
max_stack_depth = 4MB
max_files_per_process = 5000
effective_io_concurrency = 2
wal_level = hot_standby
synchronous_commit = off
checkpoint_segments = 64
checkpoint_timeout = 15min
checkpoint_completion_target = 0.75
max_wal_senders = 4
wal_sender_delay = 100ms
wal_keep_segments = 128
random_page_cost = 3.0
effective_cache_size = 18GB
autovacuum = on
autovacuum_max_workers = 5
autovacuum_vacuum_threshold = 900
autovacuum_analyze_threshold = 350
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
log_min_duration_statement = 500
deadlock_timeout = 1s

DB size is about 20GB. There is no high write activity on DB. But periodically in postgresql log i see for example: "select 1" duration is about 500-1000 ms.
In this period of time response time from db terribly. This period of time not bound with high traffic. It is not other app on the server. There is not specific cron job on server.
Our app written on java and use jdbc to connect to DB and internal pooling. There is about 100 connection to DB. This is sar output:

SNIP

As you can see there is no high io activity in this period of time but db is frozen. My opinion that i have incorrect kernel setting and/or i have a mistake in postgresql.conf. Because there is not high activity on db. load avg is about 1. When there is high traffic is about 1.15. This is from nagios monitoring system.
But sometimes load is about 4 and this time matches with sar %vmeff = 100% and database response time increase.

-- 
С уважением Селявка Евгений
", "msg_date": "Tue, 5 Nov 2013 12:37:51 +0400", "msg_from": "=?KOI8-R?B?5dfHxc7JyiDzxczR18vB?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "
> PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6
> 20120305 (Red Hat 4.4.6-4), 64-bit

First, you should be using the latest update version. You are currently
missing multiple patch updates.

> listen_addresses = '*'
> port = 5433
> max_connections = 350
> shared_buffers = 8GB

Try dropping shared_buffers to 2GB. We've seen some issues on certain
systems with 8GB shared buffers.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Tue, 05 Nov 2013 11:59:57 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "On Tue, Nov 5, 2013 at 8:37 AM, Евгений Селявка <[email protected]> wrote:
> I set work_mem to 1/4 from available RAM. I have 32Gb RAM so i set
> shared_buffers to 8Gb.
I am sure you are mentioning shared_buffers here and not work_mem.
work_mem is a per-operation parameter. So if you are using an
operation involving work_mem more than 4 times simultaneously on
different sessions you'll swap pretty quickly ;)
-- 
Michael


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Tue, 5 Nov 2013 23:00:14 +0000", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" },
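To illustrate the per-operation point in a bit more detail: work_mem can stay small in postgresql.conf and be raised only for the roles or statements that genuinely need a large sort or hash. A minimal sketch of that pattern (the role name is just a placeholder, not something from this thread):

    -- keep the global default modest in postgresql.conf, e.g. work_mem = 16MB

    -- give one known heavy-query role a bigger per-operation budget
    ALTER ROLE reporting_user SET work_mem = '256MB';

    -- or raise it only for a single statement inside a transaction
    BEGIN;
    SET LOCAL work_mem = '256MB';
    SELECT ...;   -- the big sort/hash gets 256MB, everything else keeps the default
    COMMIT;

Each sort or hash step in a plan can use up to work_mem on its own, which is why a global 256MB with hundreds of connections multiplies out so badly.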
{ "msg_contents": "On Sat, Nov 2, 2013 at 12:54 PM, Евгений Селявка <[email protected]> wrote:

SNIP

> max_connections = 350
SNIP
> work_mem = 256MB

These two settings together are quite dangerous.

1: Look into a db pooler to get your connections needed down to no
more than 2x # of cores in your machine. I recommend pgbouncer
2: Your current settings mean that if you max out connections and each
of those connections does a large sort at the same time, they'll try
to allocate 256MB*350 or 89,600MB. If you run one job that can use
that much work_mem, then set it by connection or user for that one
connection or user only. Allowing any process to allocate 256MB is
usually a bad idea, and doubly so if you allow 350 incoming
connections. Dropping work_mem to 16MB means a ceiling of about 5G
memory if you get swamped and each query is averaging 1 sort. Note
that a single query CAN run > 1 sort, so it's not a hard limit and you
could still swamp your machine, but it's less likely.
3: Turn off the OOM killer. On a production DB it is unwelcome and
could cause some serious issues.
4: vm.swappiness = 0 is my normal setting. I also tend to just turn
off swap on big memory linux boxes because linux virtual memory is
often counterproductive on db servers. Some people might even say it
is broken, I tend to agree. Better to have a process fail to allocate
memory and report it in logs than have a machine slow to a crawl under
load. But that's your choice. And 64G isn't that big, so you're in the
in between zone for me on whether to just turn off swap.
5: turn down shared_buffers to 1 or 2G.
6: lower all your vm dirty ratio / size settings so that the machine
never has to write a lot at one time.

Basically don't TRY to allocate all the memory, try to leave 75% or so
free for the OS to allocate as buffers. After getting a baseline for
performance under load then make bigger changes.


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Tue, 5 Nov 2013 16:22:43 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" },
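A minimal pgbouncer.ini sketch of the kind of setup recommended above, just to make the moving parts concrete (database name, paths and pool sizes are placeholders, not values from this thread; the server port 5433 matches the posted postgresql.conf):

    [databases]
    ; the application connects to pgbouncer on 6432 instead of PostgreSQL directly
    mydb = host=127.0.0.1 port=5433 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; session pooling is the drop-in mode; transaction pooling packs connections tighter
    pool_mode = session
    ; roughly 2x cores actually talking to PostgreSQL ...
    default_pool_size = 16
    ; ... while many more application clients can wait in the queue
    max_client_conn = 400

The application keeps its own JDBC pool but points it at port 6432; pgbouncer then funnels those clients into a small, steady number of real backends.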
{ "msg_contents": "Thank you for advice.

1)
First of all, we use a java app with a jdbc driver which can pool connections,
that's why I don't think it is a good decision to put one more pooler between
the app and DB. Maybe someone has experience with pgbouncer and jdbc and
could give good advice on the advantages and disadvantages of this
architecture.

2) Yes, this is my error in configuration, and every day or two I decrease
work_mem and monitor my system and the postgresql log, trying to find records
about temp files. I will decrease work_mem to 16MB or maybe 32MB. But
really, I repeat that I have about 100 concurrent connections to my DB. I
set this value with a big reserve. I can't change this setting because the db is in
production.

3)
I also read about disabling the OOM killer, but when I set
vm.overcommit_memory=2 my DB worked for a couple of days and then pacemaker stopped
it, because I set a wrong value for vm.overcommit_ratio (I set it to 90). And
when pacemaker tried to execute psql -c 'select 1', postmaster returned 'out of
memory' and pacemaker stopped my production DB. I need to know what is the
correct value for vm.overcommit_ratio, or how postmaster allocates memory
when it forks - maybe a formula or something? [see the overcommit sketch after
this message] If I get answers to this question I can pick a vm.overcommit_ratio.

4)
About vm.swappiness I totally agree, and I turned it on for experimental purposes,
because I have the problem that my db freezes. I play with different kernel
settings trying to pick the correct value. In the beginning I set it to 0 and
all worked fine.

5)
I will be planning downtime and will decrease max_connections and shared_buffers.

6) I set these values:
vm.dirty_bytes=67108864 - this value equals my Smart Array E200i cache size.
vm.dirty_background_bytes = 16777216 - 1/4 of vm.dirty_bytes

<Basically don't TRY to allocate all the memory, try to leave 75% or so
<free for the OS to allocate as buffers. After getting a baseline for
<performance under load then make bigger changes

Does this mean that I should set effective_cache_size to 75% of my RAM?



2013/11/6 Scott Marlowe <[email protected]>

> On Sat, Nov 2, 2013 at 12:54 PM, Евгений Селявка <[email protected]>
> wrote:
>
> SNIP
>
> > max_connections = 350
> SNIP
> > work_mem = 256MB
>
> These two settings together are quite dangerous.
>
> 1: Look into a db pooler to get your connections needed down to no
> more than 2x # of cores in your machine. I recommend pgbouncer
> 2: Your current settings mean that if you max out connections and each
> of those connections does a large sort at the same time, they'll try
> to allocate 256MB*350 or 89,600MB. If you run one job that can use
> that much work_mem, then set it by connection or user for that one
> connection or user only. Allowing any process to allocate 256MB is
> usually a bad idea, and doubly so if you allow 350 incoming
> connections. Dropping work_mem to 16MB means a ceiling of about 5G
> memory if you get swamped and each query is averaging 1 sort. Note
> that a single query CAN run > 1 sort, so it's not a hard limit and you
> could still swamp your machine, but it's less likely.
> 3: Turn off the OOM killer. On a production DB it is unwelcome and
> could cause some serious issues.
> 4: vm.swappiness = 0 is my normal setting. I also tend to just turn
> off swap on big memory linux boxes because linux virtual memory is
> often counterproductive on db servers. Some people might even say it
> is broken, I tend to agree. Better to have a process fail to allocate
> memory and report it in logs than have a machine slow to a crawl under
> load. But that's your choice. And 64G isn't that big, so you're in the
> in between zone for me on whether to just turn off swap.
> 5: turn down shared_buffers to 1 or 2G.
> 6: lower all your vm dirty ratio / size settings so that the machine
> never has to write a lot at one time.
>
> Basically don't TRY to allocate all the memory, try to leave 75% or so
> free for the OS to allocate as buffers. After getting a baseline for
> performance under load then make bigger changes.
>



-- 
С уважением Селявка Евгений
", "msg_date": "Wed, 6 Nov 2013 12:53:24 +0400", "msg_from": "=?KOI8-R?B?5dfHxc7JyiDzxczR18vB?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql recommendation memory" },
{ "msg_contents": "On Wed, Nov 6, 2013 at 1:53 AM, Евгений Селявка <[email protected]> wrote:
> Thank you for advice.
>
> 1)
> First of all, we use a java app with a jdbc driver which can pool connections,
> that's why I don't think it is a good decision to put one more pooler
> between the app and DB. Maybe someone has experience with pgbouncer and
> jdbc and could give good advice on the advantages and disadvantages of this
> architecture.

That's a mostly religious argument. I.e. you're going on feeling here
that pooling in jdbc alone is better than either jdbc/pgbouncer or
plain pgbouncer alone. My experience is that jdbc pooling is not in
the same category as pgbouncer for configuration and performance.
Either way, get that connection count down to something reasonable.

If you've routinely got 200+ connections you've got too many. Again,
2x cores is about max on most machines for maximum throughput. 3x to
4x is the absolute max really. Unless you've got a machine with 40+
cores, you should be running a lot fewer connections.

Basically pgbouncer is very lightweight and can take thousands of
incoming connections and balance them into a few dozen connections to
the database.

> 2) Yes, this is my error in configuration, and every day or two I decrease
> work_mem and monitor my system and the postgresql log, trying to find records
> about temp files. I will decrease work_mem to 16MB or maybe 32MB. But
> really, I repeat that I have about 100 concurrent connections to my DB. I set
> this value with a big reserve. I can't change this setting because the db is in
> production.

If you've got one job that needs lots of mem and a lot of jobs that
don't, look at my recommendation to lower work_mem for all the low mem
requiring jobs. If you can split those heavy lifting jobs out to
another user, then you can use a pooler like pgbouncer to do admission
control by limiting that heavy lifter to a few connections at a time.
The rest will wait in line behind it.

> 3)
> I also read about disabling the OOM killer, but when I set
> vm.overcommit_memory=2 my DB worked for a couple of days and then pacemaker stopped
> it, because I set a wrong value for vm.overcommit_ratio (I set it to 90). And
> when pacemaker tried to execute psql -c 'select 1', postmaster returned 'out of
> memory' and pacemaker stopped my production DB. I need to know what is the
> correct value for vm.overcommit_ratio, or how postmaster allocates memory when
> it forks - maybe a formula or something? If I get answers to this question I can
> pick a vm.overcommit_ratio.

You are definitely running your server out of memory then. Can you
throw say 256G into it? It's usually worth every penny to throw memory
at the problem. Reducing usage will help a lot for now tho.

> 4)
> About vm.swappiness I totally agree, and I turned it on for experimental purposes,
> because I have the problem that my db freezes. I play with different kernel
> settings trying to pick the correct value. In the beginning I set it to 0 and
> all worked fine.

The linux kernel is crazy about swapping. Even with swappiness set to
0, it'll swap stuff out once it's gotten old. Suddenly shared_buffers
are on disk not in ram etc. On big memory machines (we use 128G to 1TB
memory at work) I just turn it off because the bigger the memory the
dumber it seems to get.

> 6) I set these values:
> vm.dirty_bytes=67108864 - this value equals my Smart Array E200i cache size.
> vm.dirty_background_bytes = 16777216 - 1/4 of vm.dirty_bytes

Ahh I usually set the ratio for dirty_ratio to something small like 5
to 10 and dirty_background_ratio to 1. The less bursty the dirty
background stuff is the better in db land.
Your numbers are fairly low assuming dirty_bytes is in bytes and not
kilobytes or something. :) I never use it so I'm not sure one way or
the other.

> <Basically don't TRY to allocate all the memory, try to leave 75% or so
> <free for the OS to allocate as buffers. After getting a baseline for
> <performance under load then make bigger changes
>
> Does this mean that I should set effective_cache_size to 75% of my RAM?

That is a reasonable number, especially once you get the machine to
stop using so much memory for sorts and shared_buffers. The idea is
that when you look at free, after the db's been up for a day or two,
you should see 75% or so of your RAM allocated to cache / buffers.

Good luck, keep us informed on your progress.


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Wed, 6 Nov 2013 09:35:52 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" },
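For reference, the sysctl side of the advice above, written out as it would look in /etc/sysctl.conf (the values mirror the numbers suggested in this thread; treat them as a starting point to test, not a tuned recommendation):

    # keep the kernel from aggressively swapping out PostgreSQL memory
    vm.swappiness = 0
    # small percentage-based dirty limits so writeback stays smooth instead of bursty
    vm.dirty_background_ratio = 1
    vm.dirty_ratio = 10

    # apply the file without a reboot:
    #   sysctl -p

Note that the ratio settings and the *_bytes settings already in use are alternatives for the same limits; only one of each pair is in effect at a time, so setting the ratios takes over from dirty_bytes/dirty_background_bytes.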
{ "msg_contents": "As a followup to my previous message, here's a response curve on a 48
core server I used at my last job.

https://picasaweb.google.com/lh/photo/aPYHPWPivPsS79fG3AKtZNMTjNZETYmyPJy0liipFm0?feat=directlink

Note the peak at around 38 to 48 cores. This is the sweet spot on this
server for connections. If I allow this server to get to 68 to 70 hard
working connections my throughput drops by half. It's far better to
have inbound connections sit in a queue in a connection pooler and
wait their turn than to have them all clobber this server at one time.

Of course the issue here is active connections, not idle ones. So if
you have say 150 idle and 50 active connections on this server you'd
be doing fine. Until load started to climb. Then as the number of
active connections went past 50, it would get slower in a non-linear
fashion.
Since the throughput is half, at 70 or so connections each\nquery would now be 1/4th or so as fast as they had been when we had 35\nor so.\n\nSo it's a good idea to get some idea of where that sweet spot is on\nyour server and stay under it with a good pooler.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Nov 2013 09:46:24 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "Also also, the definitive page for postgres and dirty pages etc is here:\n\nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\nNot sure if it's out of date with more modern kernels. Maybe Greg will chime in.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Nov 2013 10:04:41 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "On Sat, Nov 2, 2013 at 1:54 PM, Евгений Селявка <[email protected]> wrote:\n> Please help with advice!\n>\n> Server\n> HP ProLiant BL460c G1\n>\n> Architecture: x86_64\n> CPU op-mode(s): 32-bit, 64-bit\n> Byte Order: Little Endian\n> CPU(s): 8\n> On-line CPU(s) list: 0-7\n> Thread(s) per core: 1\n> Core(s) per socket: 4\n> CPU socket(s): 2\n> NUMA node(s): 1\n> Vendor ID: GenuineIntel\n> CPU family: 6\n> Model: 23\n> Stepping: 6\n> CPU MHz: 3000.105\n> BogoMIPS: 6000.04\n> Virtualization: VT-x\n> L1d cache: 32K\n> L1i cache: 32K\n> L2 cache: 6144K\n> NUMA node0 CPU(s): 0-7\n>\n> 32GB RAM\n> [root@db3 ~]# numactl --hardware\n> available: 1 nodes (0)\n> node 0 cpus: 0 1 2 3 4 5 6 7\n> node 0 size: 32765 MB\n> node 0 free: 317 MB\n> node distances:\n> node 0\n> 0: 10\n>\n>\n> RAID1 2x146GB 10k rpm\n>\n> CentOS release 6.3 (Final)\n> Linux 2.6.32-279.11.1.el6.x86_64 #1 SMP x86_64 GNU/Linux\n>\n>\n> kernel.msgmnb = 65536\n> kernel.msgmax = 65536\n> kernel.shmmax = 68719476736\n> kernel.shmall = 4294967296\n> vm.swappiness = 30\n> vm.dirty_background_bytes = 67108864\n> vm.dirty_bytes = 536870912\n>\n>\n> PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6\n> 20120305 (Red Hat 4.4.6-4), 64-bit\n>\n> listen_addresses = '*'\n> port = 5433\n> max_connections = 350\n> shared_buffers = 8GB\n> temp_buffers = 64MB\n> max_prepared_transactions = 350\n> work_mem = 256MB\n> maintenance_work_mem = 1GB\n> max_stack_depth = 4MB\n> max_files_per_process = 5000\n> effective_io_concurrency = 2\n> wal_level = hot_standby\n> synchronous_commit = off\n> checkpoint_segments = 64\n> checkpoint_timeout = 15min\n> checkpoint_completion_target = 0.75\n> max_wal_senders = 4\n> wal_sender_delay = 100ms\n> wal_keep_segments = 128\n> random_page_cost = 3.0\n> effective_cache_size = 18GB\n> autovacuum = on\n> autovacuum_max_workers = 5\n> autovacuum_vacuum_threshold = 900\n> autovacuum_analyze_threshold = 350\n> autovacuum_vacuum_scale_factor = 0.1\n> autovacuum_analyze_scale_factor = 0.05\n> log_min_duration_statement = 500\n> deadlock_timeout = 1s\n>\n>\n> DB size is about 20GB. There is no high write activity on DB. But\n> periodically in postgresql log i see for example: \"select 1\" duration is\n> about 500-1000 ms.\n>\n> In this period of time response time from db terribly. 
This period of time\n> not bound with high traffic. It is not other app on the server. There is not\n> specific cron job on server.\n>\n> Our app written on java and use jdbc to connect to DB and internal pooling.\n> There is about 100 connection to DB. This is sar output:\n>\n> 12:00:01 AM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s\n> pgscand/s pgsteal/s %vmeff\n> 09:30:01 PM 73.17 302.72 134790.16 0.00 46809.73 0.00\n> 0.00 0.00 0.00\n> 09:35:01 PM 63.42 655.80 131740.74 0.00 46182.74 0.00\n> 0.00 0.00 0.00\n> 09:40:01 PM 76.87 400.62 122375.34 0.00 42096.27 0.00\n> 0.00 0.00 0.00\n> 09:45:01 PM 58.49 198.33 121922.86 0.00 42765.27 0.00\n> 0.00 0.00 0.00\n> 09:50:01 PM 52.21 485.45 136775.65 0.15 49098.65 0.00\n> 0.00 0.00 0.00\n> 09:55:01 PM 49.68 476.75 130159.24 0.00 45192.54 0.00\n> 0.00 0.00 0.00\n> 10:00:01 PM 41.35 295.34 118655.80 0.00 40786.52 0.00\n> 0.00 0.00 0.00\n> 10:05:01 PM 60.84 593.85 129890.83 0.00 44170.92 0.00\n> 0.00 0.00 0.00\n> 10:10:01 PM 52.08 471.36 132773.63 0.00 46019.13 0.00\n> 2.41 2.41 100.00\n> 10:15:01 PM 73.93 196.50 129384.21 0.33 45255.76 65.92\n> 1.19 66.87 99.64\n> 10:20:02 PM 70.35 473.16 121940.38 0.11 44061.52 81.95\n> 37.79 119.42 99.73\n> 10:25:01 PM 57.84 471.69 130583.33 0.01 46093.33 0.00\n> 0.00 0.00 0.00\n> 10:30:01 PM 52.91 321.62 119264.34 0.01 41748.19 0.00\n> 0.00 0.00 0.00\n> 10:35:01 PM 47.13 451.78 114625.62 0.02 40600.98 0.00\n> 0.00 0.00 0.00\n> 10:40:01 PM 48.96 472.41 102352.79 0.00 35402.17 0.00\n> 0.00 0.00 0.00\n> 10:45:01 PM 70.07 321.33 121423.02 0.00 43052.04 0.00\n> 0.00 0.00 0.00\n> 10:50:01 PM 46.78 479.95 128938.09 0.02 37864.07 116.64\n> 48.97 165.07 99.67\n> 10:55:02 PM 104.84 453.55 109189.98 0.00 37583.50 0.00\n> 0.00 0.00 0.00\n> 11:00:01 PM 46.23 248.75 107313.26 0.00 37278.10 0.00\n> 0.00 0.00 0.00\n> 11:05:01 PM 44.28 446.41 115598.61 0.01 40070.61 0.00\n> 0.00 0.00 0.00\n> 11:10:01 PM 38.86 457.32 100240.71 0.00 34407.29 0.00\n> 0.00 0.00 0.00\n> 11:15:01 PM 48.23 275.60 104780.84 0.00 36183.84 0.00\n> 0.00 0.00 0.00\n> 11:20:01 PM 92.74 432.49 114698.74 0.01 40413.14 0.00\n> 0.00 0.00 0.00\n> 11:25:01 PM 42.76 428.50 87769.28 0.00 29379.87 0.00\n> 0.00 0.00 0.00\n> 11:30:01 PM 36.83 260.34 85072.46 0.00 28234.50 0.00\n> 0.00 0.00 0.00\n> 11:35:01 PM 62.52 481.56 93150.67 0.00 31137.13 0.00\n> 0.00 0.00 0.00\n> 11:40:01 PM 43.50 459.10 90407.34 0.00 30241.70 0.00\n> 0.00 0.00 0.00\n>\n> 12:00:01 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit\n> %commit\n> 09:30:01 PM 531792 32345400 98.38 475504 29583340 10211064\n> 27.62\n> 09:35:01 PM 512096 32365096 98.44 475896 29608660 10200916\n> 27.59\n> 09:40:01 PM 455584 32421608 98.61 476276 29638952 10211652\n> 27.62\n> 09:45:01 PM 425744 32451448 98.71 476604 29662384 10206044\n> 27.60\n> 09:50:01 PM 380960 32496232 98.84 477004 29684296 10243704\n> 27.71\n> 09:55:01 PM 385644 32491548 98.83 477312 29706940 10204776\n> 27.60\n> 10:00:01 PM 348604 32528588 98.94 477672 29725476 10228984\n> 27.67\n> 10:05:01 PM 279216 32597976 99.15 478104 29751016 10281748\n> 27.81\n> 10:10:01 PM 255168 32622024 99.22 478220 29769924 10247404\n> 27.72\n> 10:15:01 PM 321188 32556004 99.02 475124 29721912 10234500\n> 27.68\n> 10:20:02 PM 441660 32435532 98.66 472336 29610476 10246288\n> 27.71\n> 10:25:01 PM 440636 32436556 98.66 472636 29634960 10219940\n> 27.64\n> 10:30:01 PM 469872 32407320 98.57 473016 29651476 10208520\n> 27.61\n> 10:35:01 PM 414540 32462652 98.74 473424 29672728 10223964\n> 27.65\n> 10:40:01 PM 354632 32522560 98.92 473772 29693016 
10247752\n> 27.72\n> 10:45:01 PM 333708 32543484 98.98 474092 29720256 10227204\n> 27.66\n> 10:50:01 PM 528004 32349188 98.39 469396 29549832 10219536\n> 27.64\n> 10:55:02 PM 499068 32378124 98.48 469692 29587140 10204836\n> 27.60\n> 11:00:01 PM 462980 32414212 98.59 470032 29606764 10235820\n> 27.68\n> 11:05:01 PM 449540 32427652 98.63 470368 29626136 10209788\n> 27.61\n> 11:10:01 PM 419984 32457208 98.72 470772 29644248 10214480\n> 27.63\n> 11:15:01 PM 429900 32447292 98.69 471104 29664292 10202344\n> 27.59\n> 11:20:01 PM 394852 32482340 98.80 471528 29698052 10207604\n> 27.61\n> 11:25:01 PM 345328 32531864 98.95 471904 29717264 10215632\n> 27.63\n> 11:30:01 PM 368224 32508968 98.88 472236 29733544 10206468\n> 27.61\n> 11:35:01 PM 321800 32555392 99.02 472528 29758548 10211820\n> 27.62\n> 11:40:01 PM 282520 32594672 99.14 472860 29776952 10243516\n> 27.71\n>\n> 12:00:01 AM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n> await svctm %util\n> 09:30:01 PM dev253-5 66.29 146.33 483.33 9.50 6.27\n> 94.53 2.08 13.78\n> 09:35:01 PM dev253-5 154.80 126.85 1192.96 8.53 28.57\n> 184.59 1.45 22.43\n> 09:40:01 PM dev253-5 92.21 153.75 686.75 9.11 11.53\n> 125.00 1.87 17.21\n> 09:45:01 PM dev253-5 39.66 116.99 279.32 9.99 0.42\n> 10.66 2.61 10.36\n> 09:50:01 PM dev253-5 106.73 95.58 820.70 8.58 16.77\n> 157.12 1.68 17.88\n> 09:55:01 PM dev253-5 107.90 99.36 831.46 8.63 16.05\n> 148.76 1.71 18.42\n> 10:00:01 PM dev253-5 62.48 82.70 471.28 8.87 5.91\n> 94.52 2.10 13.11\n> 10:05:01 PM dev253-5 137.84 121.69 1064.03 8.60 24.48\n> 177.31 1.56 21.52\n> 10:10:01 PM dev253-5 107.93 104.16 827.83 8.64 16.69\n> 155.04 1.68 18.11\n> 10:15:01 PM dev253-5 40.55 126.12 277.57 9.96 0.41\n> 10.13 2.57 10.42\n> 10:20:02 PM dev253-5 104.33 136.77 793.49 8.92 16.97\n> 162.69 1.76 18.35\n> 10:25:01 PM dev253-5 108.04 115.36 825.26 8.71 16.68\n> 154.36 1.76 19.05\n> 10:30:01 PM dev253-5 69.72 105.66 523.05 9.02 7.45\n> 106.92 1.90 13.25\n> 10:35:01 PM dev253-5 101.58 91.59 781.85 8.60 15.00\n> 147.68 1.67 16.97\n> 10:40:01 PM dev253-5 107.50 97.91 827.17 8.61 17.68\n> 164.49 1.77 19.06\n> 10:45:01 PM dev253-5 69.98 140.13 519.57 9.43 7.09\n> 101.25 1.96 13.72\n> 10:50:01 PM dev253-5 104.30 83.31 806.12 8.53 16.18\n> 155.10 1.65 17.16\n> 10:55:02 PM dev253-5 106.86 209.65 790.27 9.36 15.59\n> 145.08 1.74 18.60\n> 11:00:01 PM dev253-5 50.42 92.08 371.52 9.19 3.05\n> 62.16 2.28 11.52\n> 11:05:01 PM dev253-5 101.06 88.31 776.57 8.56 15.12\n> 149.58 1.67 16.90\n> 11:10:01 PM dev253-5 103.08 77.73 798.23 8.50 17.14\n> 166.25 1.74 17.90\n> 11:15:01 PM dev253-5 57.74 96.45 428.62 9.09 5.23\n> 90.52 2.13 12.32\n> 11:20:01 PM dev253-5 97.73 185.18 727.38 9.34 14.64\n> 149.84 1.94 18.92\n> 11:25:01 PM dev253-5 95.03 85.52 730.31 8.58 14.42\n> 151.79 1.79 16.97\n> 11:30:01 PM dev253-5 53.76 73.65 404.47 8.89 3.94\n> 73.25 2.17 11.64\n> 11:35:01 PM dev253-5 110.37 125.05 842.17 8.76 16.96\n> 153.63 1.66 18.30\n> 11:40:01 PM dev253-5 103.93 87.00 801.59 8.55 16.01\n> 154.00 1.73 18.00\n>\n> As you can see there is no high io activity in this period of time but db is\n> frozen. My opinion that i have incorrect kernel setting and/or i have a\n> mistake in postgresql.conf. Because there is not high activity on db. load\n> avg is about 1. When there is high traffic is about 1.15. 
This is from
> nagios monitoring system.
>
> But sometimes load is about 4 and this time matches with sar %vmeff =
> 100%
> and database response time increase.


Need to see: iowait, system load.

Also consider installing perf and grabbing a profile while issue is happening.

Probably this problem will go away with 2GB shared buffers, but before
doing that we'd like to diagnose this if possible.

merlin


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Wed, 6 Nov 2013 17:18:18 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "On Wed, Nov 6, 2013 at 8:35 AM, Scott Marlowe <[email protected]> wrote:
> That's a mostly religious argument. I.e. you're going on feeling here
> that pooling in jdbc alone is better than either jdbc/pgbouncer or
> plain pgbouncer alone. My experience is that jdbc pooling is not in
> the same category as pgbouncer for configuration and performance.
> Either way, get that connection count down to something reasonable.
>
> Basically pgbouncer is very lightweight and can take thousands of
> incoming connections and balance them into a few dozen connections to
> the database.

I've had a look at the pgbouncer docs, and it appears that there are 3
modes: session, transaction and statement.

Session pooling appears to be the most conservative, but unless I am
missing something, I don't see how it will reduce the number of actual
database connections when used in between a JDBC connection pool?

If using Transaction pooling, it appears that you need to disable
prepared statements in JDBC - and the FAQ says you need to apply a
patch to the JDBC driver to get it (admittedly, that FAQ appears to be
out of date given the JDBC version it references).

-Dave


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Thu, 7 Nov 2013 00:51:15 -0800", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" },
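On the prepared-statement point: with reasonably recent versions of the stock PostgreSQL JDBC driver this should not need a patched driver - the prepareThreshold connection parameter controls server-side prepared statements, and setting it to 0 keeps them off, which is what pgbouncer's transaction pool mode expects. A sketch (host, port and database name are placeholders; 6432 assumes a local pgbouncer as in the earlier sketch):

    # JDBC URL with server-side prepared statements disabled
    jdbc:postgresql://127.0.0.1:6432/mydb?prepareThreshold=0

With session pooling none of this is necessary, but then, as noted above, the pool only helps if the application actually releases connections instead of holding all of them permanently.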
{ "msg_contents": "Scott thank you for advice.

> If you've got one job that needs lots of mem and a lot of jobs that
> don't, look at my recommendation to lower work_mem for all the low mem
> requiring jobs. If you can split those heavy lifting jobs out to
> another user, then you can use a pooler like pgbouncer to do admission
> control by limiting that heavy lifter to a few connections at a time.
> The rest will wait in line behind it.

I will decrease this parameter to 32MB, because this DB cluster is for a WEB
application, that's why there is nothing doing heavyweight queries.

> You are definitely running your server out of memory then. Can you
> throw say 256G into it? It's usually worth every penny to throw memory
> at the problem. Reducing usage will help a lot for now tho.

Unfortunately no, the most I can grow my memory to is 72GB. If I add
another 32GB to the server, what shared_buffers should I use: 8GB, 2GB or 18GB
(1/4 of 72GB)?
For the time being I can set vm.overcommit_ratio=500 or 700, but this is
very dangerous I think, because every process can allocate
(RAM+SWAP)*vm.overcommit_ratio/100 as I understand?

Once again thank you very much for the link, I have read about it and the graph.
About max_connections I will reply later, now I am calculating it.


2013/11/6 Scott Marlowe <[email protected]>

> Also also, the definitive page for postgres and dirty pages etc is here:
>
> http://www.westnet.com/~gsmith/content/linux-pdflush.htm
>
> Not sure if it's out of date with more modern kernels. Maybe Greg will
> chime in.
>



-- 
С уважением Селявка Евгений
", "msg_date": "Thu, 7 Nov 2013 13:33:52 +0400", "msg_from": "=?KOI8-R?B?5dfHxc7JyiDzxczR18vB?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql recommendation memory" },
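Before settling on a larger vm.overcommit_ratio it is worth watching how close the box actually gets to the commit limit; a plain-shell sketch of that check (nothing here is specific to this setup):

    # CommitLimit is what the kernel will allow in total (swap + RAM * ratio / 100),
    # Committed_AS is what all processes have currently asked for
    grep -E 'CommitLimit|Committed_AS' /proc/meminfo
    sysctl vm.overcommit_memory vm.overcommit_ratio

The kbcommit / %commit columns in the sar -r output in the next message track the same committed-memory figure over time, so they give a feel for how much headroom a given ratio would leave.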
{ "msg_contents": "All my sar statistics

sar -r
11:40:02 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit
01:15:01 PM    269108  32608084     99.18    367144  29707240  10289444     27.83
01:20:01 PM    293560  32583632     99.11    367428  29674272  10287136     27.82
01:25:01 PM    417640  32459552     98.73    366148  29563220  10289220     27.83
01:30:01 PM    399448  32477744     98.79    366812  29596520  10298876     27.85
01:35:01 PM    432332  32444860     98.69    367412  29616524  10277484     27.80

sar -d -p
11:40:02 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
01:15:01 PM vg_root-lv_pgsql     73.10    116.59    540.15      8.98      6.98     95.44      1.61     11.78
01:20:01 PM vg_root-lv_pgsql     71.39    170.21    508.21      9.50      5.44     76.23      1.72     12.31
01:25:01 PM vg_root-lv_pgsql     54.32    136.21    381.53      9.53      3.58     65.98      1.81      9.85
01:30:01 PM vg_root-lv_pgsql     81.35    167.98    585.25      9.26      8.15    100.13      1.63     13.25
01:35:01 PM vg_root-lv_pgsql     66.75    126.02    482.72      9.12      5.59     83.73      1.78     11.90

sar -u ALL
11:40:02 AM     CPU      %usr     %nice      %sys   %iowait    %steal      %irq     %soft    %guest     %idle
01:15:01 PM     all      8.57      0.00      1.52      1.46      0.00      0.00      0.05      0.00     88.40
01:20:01 PM     all      8.50      0.00      1.53      1.61      0.00      0.00      0.05      0.00     88.31
01:25:01 PM     all      9.00      0.00      1.78      1.27      0.00      0.00      0.06      0.00     87.89
01:30:01 PM     all      9.58      0.00      1.63      1.71      0.00      0.00      0.06      0.00     87.01
01:35:01 PM     all      8.75      0.00      1.47      1.57      0.00      0.00      0.06      0.00     88.15

As you say, I installed perf and got statistics with the command

perf record -g -f -u postgres -e
block:block_rq_*,syscalls:sys_enter_write,syscalls:sys_enter_fsync

But I really don't understand the perf report and what values I need to see. Could
you help me with advice on how to read the perf report? What events from perf list
should I trace, and what are the good and bad values in this report for my
hardware?
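A simpler starting point than tracing individual block/syscall events is a plain sampled profile taken while one of the slow "select 1" periods is happening; a sketch, assuming a reasonably recent perf (the exact report layout varies by version):

    # sample all CPUs with call graphs for ~30 seconds during a stall
    perf record -a -g -- sleep 30

    # then read the report, heaviest consumers first
    perf report --stdio | head -50

In the report the leftmost percentage is simply the share of samples; if the stalls are memory-reclaim related, kernel functions from the page reclaim/compaction paths would be expected near the top during the bad periods, matching the %vmeff spikes seen earlier in sar.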
2013/11/7 Merlin Moncure <[email protected]>

> On Sat, Nov 2, 2013 at 1:54 PM, Евгений Селявка <[email protected]> wrote:
> > Please help with advice!
> >
SNIP
> >
> > DB size is about 20GB. There is no high write activity on DB. But
> > periodically in postgresql log i see for example: "select 1" duration is
> > about 500-1000 ms.
> >
> > In this period of time response time from db terribly. This period of
> > time not bound with high traffic. It is not other app on the server.
> > There is not specific cron job on server.
> >
> > Our app written on java and use jdbc to connect to DB and internal
> > pooling.
> > There is about 100 connection to DB.
This is sar output:\n> >\n> > 12:00:01 AM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s\n> > pgscand/s pgsteal/s %vmeff\n> > 09:30:01 PM 73.17 302.72 134790.16 0.00 46809.73 0.00\n> > 0.00 0.00 0.00\n> > 09:35:01 PM 63.42 655.80 131740.74 0.00 46182.74 0.00\n> > 0.00 0.00 0.00\n> > 09:40:01 PM 76.87 400.62 122375.34 0.00 42096.27 0.00\n> > 0.00 0.00 0.00\n> > 09:45:01 PM 58.49 198.33 121922.86 0.00 42765.27 0.00\n> > 0.00 0.00 0.00\n> > 09:50:01 PM 52.21 485.45 136775.65 0.15 49098.65 0.00\n> > 0.00 0.00 0.00\n> > 09:55:01 PM 49.68 476.75 130159.24 0.00 45192.54 0.00\n> > 0.00 0.00 0.00\n> > 10:00:01 PM 41.35 295.34 118655.80 0.00 40786.52 0.00\n> > 0.00 0.00 0.00\n> > 10:05:01 PM 60.84 593.85 129890.83 0.00 44170.92 0.00\n> > 0.00 0.00 0.00\n> > 10:10:01 PM 52.08 471.36 132773.63 0.00 46019.13 0.00\n> > 2.41 2.41 100.00\n> > 10:15:01 PM 73.93 196.50 129384.21 0.33 45255.76 65.92\n> > 1.19 66.87 99.64\n> > 10:20:02 PM 70.35 473.16 121940.38 0.11 44061.52 81.95\n> > 37.79 119.42 99.73\n> > 10:25:01 PM 57.84 471.69 130583.33 0.01 46093.33 0.00\n> > 0.00 0.00 0.00\n> > 10:30:01 PM 52.91 321.62 119264.34 0.01 41748.19 0.00\n> > 0.00 0.00 0.00\n> > 10:35:01 PM 47.13 451.78 114625.62 0.02 40600.98 0.00\n> > 0.00 0.00 0.00\n> > 10:40:01 PM 48.96 472.41 102352.79 0.00 35402.17 0.00\n> > 0.00 0.00 0.00\n> > 10:45:01 PM 70.07 321.33 121423.02 0.00 43052.04 0.00\n> > 0.00 0.00 0.00\n> > 10:50:01 PM 46.78 479.95 128938.09 0.02 37864.07 116.64\n> > 48.97 165.07 99.67\n> > 10:55:02 PM 104.84 453.55 109189.98 0.00 37583.50 0.00\n> > 0.00 0.00 0.00\n> > 11:00:01 PM 46.23 248.75 107313.26 0.00 37278.10 0.00\n> > 0.00 0.00 0.00\n> > 11:05:01 PM 44.28 446.41 115598.61 0.01 40070.61 0.00\n> > 0.00 0.00 0.00\n> > 11:10:01 PM 38.86 457.32 100240.71 0.00 34407.29 0.00\n> > 0.00 0.00 0.00\n> > 11:15:01 PM 48.23 275.60 104780.84 0.00 36183.84 0.00\n> > 0.00 0.00 0.00\n> > 11:20:01 PM 92.74 432.49 114698.74 0.01 40413.14 0.00\n> > 0.00 0.00 0.00\n> > 11:25:01 PM 42.76 428.50 87769.28 0.00 29379.87 0.00\n> > 0.00 0.00 0.00\n> > 11:30:01 PM 36.83 260.34 85072.46 0.00 28234.50 0.00\n> > 0.00 0.00 0.00\n> > 11:35:01 PM 62.52 481.56 93150.67 0.00 31137.13 0.00\n> > 0.00 0.00 0.00\n> > 11:40:01 PM 43.50 459.10 90407.34 0.00 30241.70 0.00\n> > 0.00 0.00 0.00\n> >\n> > 12:00:01 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit\n> > %commit\n> > 09:30:01 PM 531792 32345400 98.38 475504 29583340 10211064\n> > 27.62\n> > 09:35:01 PM 512096 32365096 98.44 475896 29608660 10200916\n> > 27.59\n> > 09:40:01 PM 455584 32421608 98.61 476276 29638952 10211652\n> > 27.62\n> > 09:45:01 PM 425744 32451448 98.71 476604 29662384 10206044\n> > 27.60\n> > 09:50:01 PM 380960 32496232 98.84 477004 29684296 10243704\n> > 27.71\n> > 09:55:01 PM 385644 32491548 98.83 477312 29706940 10204776\n> > 27.60\n> > 10:00:01 PM 348604 32528588 98.94 477672 29725476 10228984\n> > 27.67\n> > 10:05:01 PM 279216 32597976 99.15 478104 29751016 10281748\n> > 27.81\n> > 10:10:01 PM 255168 32622024 99.22 478220 29769924 10247404\n> > 27.72\n> > 10:15:01 PM 321188 32556004 99.02 475124 29721912 10234500\n> > 27.68\n> > 10:20:02 PM 441660 32435532 98.66 472336 29610476 10246288\n> > 27.71\n> > 10:25:01 PM 440636 32436556 98.66 472636 29634960 10219940\n> > 27.64\n> > 10:30:01 PM 469872 32407320 98.57 473016 29651476 10208520\n> > 27.61\n> > 10:35:01 PM 414540 32462652 98.74 473424 29672728 10223964\n> > 27.65\n> > 10:40:01 PM 354632 32522560 98.92 473772 29693016 10247752\n> > 27.72\n> > 10:45:01 PM 333708 32543484 98.98 474092 29720256 
10227204\n> > 27.66\n> > 10:50:01 PM 528004 32349188 98.39 469396 29549832 10219536\n> > 27.64\n> > 10:55:02 PM 499068 32378124 98.48 469692 29587140 10204836\n> > 27.60\n> > 11:00:01 PM 462980 32414212 98.59 470032 29606764 10235820\n> > 27.68\n> > 11:05:01 PM 449540 32427652 98.63 470368 29626136 10209788\n> > 27.61\n> > 11:10:01 PM 419984 32457208 98.72 470772 29644248 10214480\n> > 27.63\n> > 11:15:01 PM 429900 32447292 98.69 471104 29664292 10202344\n> > 27.59\n> > 11:20:01 PM 394852 32482340 98.80 471528 29698052 10207604\n> > 27.61\n> > 11:25:01 PM 345328 32531864 98.95 471904 29717264 10215632\n> > 27.63\n> > 11:30:01 PM 368224 32508968 98.88 472236 29733544 10206468\n> > 27.61\n> > 11:35:01 PM 321800 32555392 99.02 472528 29758548 10211820\n> > 27.62\n> > 11:40:01 PM 282520 32594672 99.14 472860 29776952 10243516\n> > 27.71\n> >\n> > 12:00:01 AM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n> > await svctm %util\n> > 09:30:01 PM dev253-5 66.29 146.33 483.33 9.50 6.27\n> > 94.53 2.08 13.78\n> > 09:35:01 PM dev253-5 154.80 126.85 1192.96 8.53 28.57\n> > 184.59 1.45 22.43\n> > 09:40:01 PM dev253-5 92.21 153.75 686.75 9.11 11.53\n> > 125.00 1.87 17.21\n> > 09:45:01 PM dev253-5 39.66 116.99 279.32 9.99 0.42\n> > 10.66 2.61 10.36\n> > 09:50:01 PM dev253-5 106.73 95.58 820.70 8.58 16.77\n> > 157.12 1.68 17.88\n> > 09:55:01 PM dev253-5 107.90 99.36 831.46 8.63 16.05\n> > 148.76 1.71 18.42\n> > 10:00:01 PM dev253-5 62.48 82.70 471.28 8.87 5.91\n> > 94.52 2.10 13.11\n> > 10:05:01 PM dev253-5 137.84 121.69 1064.03 8.60 24.48\n> > 177.31 1.56 21.52\n> > 10:10:01 PM dev253-5 107.93 104.16 827.83 8.64 16.69\n> > 155.04 1.68 18.11\n> > 10:15:01 PM dev253-5 40.55 126.12 277.57 9.96 0.41\n> > 10.13 2.57 10.42\n> > 10:20:02 PM dev253-5 104.33 136.77 793.49 8.92 16.97\n> > 162.69 1.76 18.35\n> > 10:25:01 PM dev253-5 108.04 115.36 825.26 8.71 16.68\n> > 154.36 1.76 19.05\n> > 10:30:01 PM dev253-5 69.72 105.66 523.05 9.02 7.45\n> > 106.92 1.90 13.25\n> > 10:35:01 PM dev253-5 101.58 91.59 781.85 8.60 15.00\n> > 147.68 1.67 16.97\n> > 10:40:01 PM dev253-5 107.50 97.91 827.17 8.61 17.68\n> > 164.49 1.77 19.06\n> > 10:45:01 PM dev253-5 69.98 140.13 519.57 9.43 7.09\n> > 101.25 1.96 13.72\n> > 10:50:01 PM dev253-5 104.30 83.31 806.12 8.53 16.18\n> > 155.10 1.65 17.16\n> > 10:55:02 PM dev253-5 106.86 209.65 790.27 9.36 15.59\n> > 145.08 1.74 18.60\n> > 11:00:01 PM dev253-5 50.42 92.08 371.52 9.19 3.05\n> > 62.16 2.28 11.52\n> > 11:05:01 PM dev253-5 101.06 88.31 776.57 8.56 15.12\n> > 149.58 1.67 16.90\n> > 11:10:01 PM dev253-5 103.08 77.73 798.23 8.50 17.14\n> > 166.25 1.74 17.90\n> > 11:15:01 PM dev253-5 57.74 96.45 428.62 9.09 5.23\n> > 90.52 2.13 12.32\n> > 11:20:01 PM dev253-5 97.73 185.18 727.38 9.34 14.64\n> > 149.84 1.94 18.92\n> > 11:25:01 PM dev253-5 95.03 85.52 730.31 8.58 14.42\n> > 151.79 1.79 16.97\n> > 11:30:01 PM dev253-5 53.76 73.65 404.47 8.89 3.94\n> > 73.25 2.17 11.64\n> > 11:35:01 PM dev253-5 110.37 125.05 842.17 8.76 16.96\n> > 153.63 1.66 18.30\n> > 11:40:01 PM dev253-5 103.93 87.00 801.59 8.55 16.01\n> > 154.00 1.73 18.00\n> >\n> > As you can see there is no high io activity in this period of time but\n> db is\n> > frozen. My opinion that i have incorrect kernel setting and/or i have a\n> > mistake in postgresql.conf. Because there is not high activity on db.\n> load\n> > avg is about 1. When there is high traffic is about 1.15. 
This is from\n> > nagios monitoring system.\n> >\n> > But sometimes load is about 4 and this time matches with sar %vmeff =\n> 100%\n> > and database response time increase.\n>\n>\n> Need to see: iowait, system load.\n>\n> Also consider installing perf and grabbing a profile while issue is\n> happening.\n>\n> Probably this problem will go way with 2GB shared buffers, but before\n> doing that we'd like to diagnose this if possible.\n>\n> merlin\n>\n\n\n\n-- \nС уважением Селявка Евгений\n\nAll my sar statisticssar -r 11:40:02 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit01:15:01 PM    269108  32608084     99.18    367144  29707240  10289444     27.83\n01:20:01 PM    293560  32583632     99.11    367428  29674272  10287136     27.8201:25:01 PM    417640  32459552     98.73    366148  29563220  10289220     27.8301:30:01 PM    399448  32477744     98.79    366812  29596520  10298876     27.85\n01:35:01 PM    432332  32444860     98.69    367412  29616524  10277484     27.80sar -d -p 11:40:02 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util01:15:01 PM vg_root-lv_pgsql     73.10    116.59    540.15      8.98      6.98     95.44      1.61     11.78\n01:20:01 PM vg_root-lv_pgsql     71.39    170.21    508.21      9.50      5.44     76.23      1.72     12.3101:25:01 PM vg_root-lv_pgsql     54.32    136.21    381.53      9.53      3.58     65.98      1.81      9.85\n01:30:01 PM vg_root-lv_pgsql     81.35    167.98    585.25      9.26      8.15    100.13      1.63     13.2501:35:01 PM vg_root-lv_pgsql     66.75    126.02    482.72      9.12      5.59     83.73      1.78     11.90\nsar -u ALL11:40:02 AM     CPU      %usr     %nice      %sys   %iowait    %steal      %irq     %soft    %guest     %idle01:15:01 PM     all      8.57      0.00      1.52      1.46      0.00      0.00      0.05      0.00     88.40\n01:20:01 PM     all      8.50      0.00      1.53      1.61      0.00      0.00      0.05      0.00     88.3101:25:01 PM     all      9.00      0.00      1.78      1.27      0.00      0.00      0.06      0.00     87.89\n01:30:01 PM     all      9.58      0.00      1.63      1.71      0.00      0.00      0.06      0.00     87.0101:35:01 PM     all      8.75      0.00      1.47      1.57      0.00      0.00      0.06      0.00     88.15\nAs you say i install perf and get statistics with commandperf record -g -f -u postgres -e block:block_rq_*,syscalls:sys_enter_write,syscalls:sys_enter_fsyncBut i really don't understand perf report, what values i need to see. Could you help me with advice how to read perf report. 
What events from perf list i shoud trace, and what the good and bad values in this report depend of my hardware?\n2013/11/7 Merlin Moncure <[email protected]>\nOn Sat, Nov 2, 2013 at 1:54 PM, Евгений Селявка <[email protected]> wrote:\n\n> Please help with advice!\n>\n> Server\n> HP ProLiant BL460c G1\n>\n> Architecture:          x86_64\n> CPU op-mode(s):        32-bit, 64-bit\n> Byte Order:            Little Endian\n> CPU(s):                8\n> On-line CPU(s) list:   0-7\n> Thread(s) per core:    1\n> Core(s) per socket:    4\n> CPU socket(s):         2\n> NUMA node(s):          1\n> Vendor ID:             GenuineIntel\n> CPU family:            6\n> Model:                 23\n> Stepping:              6\n> CPU MHz:               3000.105\n> BogoMIPS:              6000.04\n> Virtualization:        VT-x\n> L1d cache:             32K\n> L1i cache:             32K\n> L2 cache:              6144K\n> NUMA node0 CPU(s):     0-7\n>\n> 32GB RAM\n> [root@db3 ~]# numactl --hardware\n> available: 1 nodes (0)\n> node 0 cpus: 0 1 2 3 4 5 6 7\n> node 0 size: 32765 MB\n> node 0 free: 317 MB\n> node distances:\n> node   0\n>   0:  10\n>\n>\n> RAID1 2x146GB 10k rpm\n>\n> CentOS release 6.3 (Final)\n> Linux 2.6.32-279.11.1.el6.x86_64 #1 SMP x86_64 GNU/Linux\n>\n>\n> kernel.msgmnb = 65536\n> kernel.msgmax = 65536\n> kernel.shmmax = 68719476736\n> kernel.shmall = 4294967296\n> vm.swappiness = 30\n> vm.dirty_background_bytes = 67108864\n> vm.dirty_bytes = 536870912\n>\n>\n> PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6\n> 20120305 (Red Hat 4.4.6-4), 64-bit\n>\n> listen_addresses = '*'\n> port = 5433\n> max_connections = 350\n> shared_buffers = 8GB\n> temp_buffers = 64MB\n> max_prepared_transactions = 350\n> work_mem = 256MB\n> maintenance_work_mem = 1GB\n> max_stack_depth = 4MB\n> max_files_per_process = 5000\n> effective_io_concurrency = 2\n> wal_level = hot_standby\n> synchronous_commit = off\n> checkpoint_segments = 64\n> checkpoint_timeout = 15min\n> checkpoint_completion_target = 0.75\n> max_wal_senders = 4\n> wal_sender_delay = 100ms\n> wal_keep_segments = 128\n> random_page_cost = 3.0\n> effective_cache_size = 18GB\n> autovacuum = on\n> autovacuum_max_workers = 5\n> autovacuum_vacuum_threshold = 900\n> autovacuum_analyze_threshold = 350\n> autovacuum_vacuum_scale_factor = 0.1\n> autovacuum_analyze_scale_factor = 0.05\n> log_min_duration_statement = 500\n> deadlock_timeout = 1s\n>\n>\n> DB size is about 20GB. There is no high write activity on DB. But\n> periodically in postgresql log i see for example: \"select 1\" duration is\n> about 500-1000 ms.\n>\n> In this period of time response time from db terribly. This period of time\n> not bound with high traffic. It is not other app on the server. There is not\n> specific cron job on server.\n>\n> Our app written on java and use jdbc to connect to DB and internal pooling.\n> There is about 100 connection to DB. 
This is sar output:\n>\n> 12:00:01 AM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s\n> pgscand/s pgsteal/s    %vmeff\n> 09:30:01 PM     73.17    302.72 134790.16      0.00  46809.73      0.00\n> 0.00      0.00      0.00\n> 09:35:01 PM     63.42    655.80 131740.74      0.00  46182.74      0.00\n> 0.00      0.00      0.00\n> 09:40:01 PM     76.87    400.62 122375.34      0.00  42096.27      0.00\n> 0.00      0.00      0.00\n> 09:45:01 PM     58.49    198.33 121922.86      0.00  42765.27      0.00\n> 0.00      0.00      0.00\n> 09:50:01 PM     52.21    485.45 136775.65      0.15  49098.65      0.00\n> 0.00      0.00      0.00\n> 09:55:01 PM     49.68    476.75 130159.24      0.00  45192.54      0.00\n> 0.00      0.00      0.00\n> 10:00:01 PM     41.35    295.34 118655.80      0.00  40786.52      0.00\n> 0.00      0.00      0.00\n> 10:05:01 PM     60.84    593.85 129890.83      0.00  44170.92      0.00\n> 0.00      0.00      0.00\n> 10:10:01 PM     52.08    471.36 132773.63      0.00  46019.13      0.00\n> 2.41      2.41    100.00\n> 10:15:01 PM     73.93    196.50 129384.21      0.33  45255.76     65.92\n> 1.19     66.87     99.64\n> 10:20:02 PM     70.35    473.16 121940.38      0.11  44061.52     81.95\n> 37.79    119.42     99.73\n> 10:25:01 PM     57.84    471.69 130583.33      0.01  46093.33      0.00\n> 0.00      0.00      0.00\n> 10:30:01 PM     52.91    321.62 119264.34      0.01  41748.19      0.00\n> 0.00      0.00      0.00\n> 10:35:01 PM     47.13    451.78 114625.62      0.02  40600.98      0.00\n> 0.00      0.00      0.00\n> 10:40:01 PM     48.96    472.41 102352.79      0.00  35402.17      0.00\n> 0.00      0.00      0.00\n> 10:45:01 PM     70.07    321.33 121423.02      0.00  43052.04      0.00\n> 0.00      0.00      0.00\n> 10:50:01 PM     46.78    479.95 128938.09      0.02  37864.07    116.64\n> 48.97    165.07     99.67\n> 10:55:02 PM    104.84    453.55 109189.98      0.00  37583.50      0.00\n> 0.00      0.00      0.00\n> 11:00:01 PM     46.23    248.75 107313.26      0.00  37278.10      0.00\n> 0.00      0.00      0.00\n> 11:05:01 PM     44.28    446.41 115598.61      0.01  40070.61      0.00\n> 0.00      0.00      0.00\n> 11:10:01 PM     38.86    457.32 100240.71      0.00  34407.29      0.00\n> 0.00      0.00      0.00\n> 11:15:01 PM     48.23    275.60 104780.84      0.00  36183.84      0.00\n> 0.00      0.00      0.00\n> 11:20:01 PM     92.74    432.49 114698.74      0.01  40413.14      0.00\n> 0.00      0.00      0.00\n> 11:25:01 PM     42.76    428.50  87769.28      0.00  29379.87      0.00\n> 0.00      0.00      0.00\n> 11:30:01 PM     36.83    260.34  85072.46      0.00  28234.50      0.00\n> 0.00      0.00      0.00\n> 11:35:01 PM     62.52    481.56  93150.67      0.00  31137.13      0.00\n> 0.00      0.00      0.00\n> 11:40:01 PM     43.50    459.10  90407.34      0.00  30241.70      0.00\n> 0.00      0.00      0.00\n>\n> 12:00:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit\n> %commit\n> 09:30:01 PM    531792  32345400     98.38    475504  29583340  10211064\n> 27.62\n> 09:35:01 PM    512096  32365096     98.44    475896  29608660  10200916\n> 27.59\n> 09:40:01 PM    455584  32421608     98.61    476276  29638952  10211652\n> 27.62\n> 09:45:01 PM    425744  32451448     98.71    476604  29662384  10206044\n> 27.60\n> 09:50:01 PM    380960  32496232     98.84    477004  29684296  10243704\n> 27.71\n> 09:55:01 PM    385644  32491548     98.83    477312  29706940  10204776\n> 27.60\n> 10:00:01 PM    348604  32528588     98.94    
477672  29725476  10228984\n> 27.67\n> 10:05:01 PM    279216  32597976     99.15    478104  29751016  10281748\n> 27.81\n> 10:10:01 PM    255168  32622024     99.22    478220  29769924  10247404\n> 27.72\n> 10:15:01 PM    321188  32556004     99.02    475124  29721912  10234500\n> 27.68\n> 10:20:02 PM    441660  32435532     98.66    472336  29610476  10246288\n> 27.71\n> 10:25:01 PM    440636  32436556     98.66    472636  29634960  10219940\n> 27.64\n> 10:30:01 PM    469872  32407320     98.57    473016  29651476  10208520\n> 27.61\n> 10:35:01 PM    414540  32462652     98.74    473424  29672728  10223964\n> 27.65\n> 10:40:01 PM    354632  32522560     98.92    473772  29693016  10247752\n> 27.72\n> 10:45:01 PM    333708  32543484     98.98    474092  29720256  10227204\n> 27.66\n> 10:50:01 PM    528004  32349188     98.39    469396  29549832  10219536\n> 27.64\n> 10:55:02 PM    499068  32378124     98.48    469692  29587140  10204836\n> 27.60\n> 11:00:01 PM    462980  32414212     98.59    470032  29606764  10235820\n> 27.68\n> 11:05:01 PM    449540  32427652     98.63    470368  29626136  10209788\n> 27.61\n> 11:10:01 PM    419984  32457208     98.72    470772  29644248  10214480\n> 27.63\n> 11:15:01 PM    429900  32447292     98.69    471104  29664292  10202344\n> 27.59\n> 11:20:01 PM    394852  32482340     98.80    471528  29698052  10207604\n> 27.61\n> 11:25:01 PM    345328  32531864     98.95    471904  29717264  10215632\n> 27.63\n> 11:30:01 PM    368224  32508968     98.88    472236  29733544  10206468\n> 27.61\n> 11:35:01 PM    321800  32555392     99.02    472528  29758548  10211820\n> 27.62\n> 11:40:01 PM    282520  32594672     99.14    472860  29776952  10243516\n> 27.71\n>\n> 12:00:01 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz\n> await     svctm     %util\n> 09:30:01 PM  dev253-5     66.29    146.33    483.33      9.50      6.27\n> 94.53      2.08     13.78\n> 09:35:01 PM  dev253-5    154.80    126.85   1192.96      8.53     28.57\n> 184.59      1.45     22.43\n> 09:40:01 PM  dev253-5     92.21    153.75    686.75      9.11     11.53\n> 125.00      1.87     17.21\n> 09:45:01 PM  dev253-5     39.66    116.99    279.32      9.99      0.42\n> 10.66      2.61     10.36\n> 09:50:01 PM  dev253-5    106.73     95.58    820.70      8.58     16.77\n> 157.12      1.68     17.88\n> 09:55:01 PM  dev253-5    107.90     99.36    831.46      8.63     16.05\n> 148.76      1.71     18.42\n> 10:00:01 PM  dev253-5     62.48     82.70    471.28      8.87      5.91\n> 94.52      2.10     13.11\n> 10:05:01 PM  dev253-5    137.84    121.69   1064.03      8.60     24.48\n> 177.31      1.56     21.52\n> 10:10:01 PM  dev253-5    107.93    104.16    827.83      8.64     16.69\n> 155.04      1.68     18.11\n> 10:15:01 PM  dev253-5     40.55    126.12    277.57      9.96      0.41\n> 10.13      2.57     10.42\n> 10:20:02 PM  dev253-5    104.33    136.77    793.49      8.92     16.97\n> 162.69      1.76     18.35\n> 10:25:01 PM  dev253-5    108.04    115.36    825.26      8.71     16.68\n> 154.36      1.76     19.05\n> 10:30:01 PM  dev253-5     69.72    105.66    523.05      9.02      7.45\n> 106.92      1.90     13.25\n> 10:35:01 PM  dev253-5    101.58     91.59    781.85      8.60     15.00\n> 147.68      1.67     16.97\n> 10:40:01 PM  dev253-5    107.50     97.91    827.17      8.61     17.68\n> 164.49      1.77     19.06\n> 10:45:01 PM  dev253-5     69.98    140.13    519.57      9.43      7.09\n> 101.25      1.96     13.72\n> 10:50:01 PM  dev253-5    104.30     83.31    806.12  
    8.53     16.18\n> 155.10      1.65     17.16\n> 10:55:02 PM  dev253-5    106.86    209.65    790.27      9.36     15.59\n> 145.08      1.74     18.60\n> 11:00:01 PM  dev253-5     50.42     92.08    371.52      9.19      3.05\n> 62.16      2.28     11.52\n> 11:05:01 PM  dev253-5    101.06     88.31    776.57      8.56     15.12\n> 149.58      1.67     16.90\n> 11:10:01 PM  dev253-5    103.08     77.73    798.23      8.50     17.14\n> 166.25      1.74     17.90\n> 11:15:01 PM  dev253-5     57.74     96.45    428.62      9.09      5.23\n> 90.52      2.13     12.32\n> 11:20:01 PM  dev253-5     97.73    185.18    727.38      9.34     14.64\n> 149.84      1.94     18.92\n> 11:25:01 PM  dev253-5     95.03     85.52    730.31      8.58     14.42\n> 151.79      1.79     16.97\n> 11:30:01 PM  dev253-5     53.76     73.65    404.47      8.89      3.94\n> 73.25      2.17     11.64\n> 11:35:01 PM  dev253-5    110.37    125.05    842.17      8.76     16.96\n> 153.63      1.66     18.30\n> 11:40:01 PM  dev253-5    103.93     87.00    801.59      8.55     16.01\n> 154.00      1.73     18.00\n>\n> As you can see there is no high io activity in this period of time but db is\n> frozen. My opinion that i have incorrect kernel setting and/or i have a\n> mistake in postgresql.conf. Because there is not high activity on db. load\n> avg is about 1. When there is high traffic is about 1.15. This is from\n> nagios monitoring system.\n>\n> But sometimes load is about 4 and this time matches with sar %vmeff = 100%\n> and database response time increase.\n\n\nNeed to see: iowait, system load.\n\nAlso consider installing perf and grabbing a profile while issue is happening.\n\nProbably this problem will go way with 2GB shared buffers, but before\ndoing that we'd like to diagnose this if possible.\n\nmerlin\n-- С уважением Селявка Евгений", "msg_date": "Thu, 7 Nov 2013 14:13:38 +0400", "msg_from": "=?KOI8-R?B?5dfHxc7JyiDzxczR18vB?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "On Sat, Nov 2, 2013 at 11:54 AM, Евгений Селявка <[email protected]> wrote:\n> DB size is about 20GB. There is no high write activity on DB. But\n> periodically in postgresql log i see for example: \"select 1\" duration is\n> about 500-1000 ms.\n>\n> In this period of time response time from db terribly. This period of time\n> not bound with high traffic. It is not other app on the server. There is not\n> specific cron job on server.\n\nHave you shown all the modified kernel settings? Don't you use huge\npages accidentally? It might be a transparent huge pages\ndefragmentation issue, the symptoms look similar.\n\nAnother thing that might cause it is network. Try to monitor it at the\ntime of these stalls.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 10 Nov 2013 16:20:35 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "Scott hi, i calculate all of my jdbc pool size. Maximum is 300 connections\nfrom components wich use jdbc. 
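On the db side the split per user and application can be checked with
something like this (only a sketch against the standard pg_stat_activity
view, column names as in 9.2 and later):

-- connections per database user and application, busiest first
SELECT usename, application_name, count(*) AS connections
FROM pg_stat_activity
GROUP BY usename, application_name
ORDER BY connections DESC;
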
I don't think that this is a good idea use\npgbouncer, because our application using spring framework which using jdbc\nand prepared statement. I try to talk with our developer about disabling\nprepared statement in this framework, they don't want do this. Thats why i\nwill try to upgrade HW and buy CPU with more core as you say based on\nformula 3-4xcore. But most of this connection is idle. This is a web based\napp not a datawarehouse, thats why all this connection is lightwear.\n\nAbout my db freeze i set this kernel parameter:\necho 1048576 > /proc/sys/vm/min_free_kbytes\necho 80 > /proc/sys/vm/vfs_cache_pressure\n\nAnd my freeze intervals is steel smaller. I try to dig deeper.\n\n\n2013/11/6 Scott Marlowe <[email protected]>\n\n> Also also, the definitive page for postgres and dirty pages etc is here:\n>\n> http://www.westnet.com/~gsmith/content/linux-pdflush.htm\n>\n> Not sure if it's out of date with more modern kernels. Maybe Greg will\n> chime in.\n>\n\n\n\n-- \nС уважением Селявка Евгений\n\nScott hi, i calculate all of my jdbc pool size. Maximum is 300 connections from components wich use jdbc. I don't think that this is a good idea use pgbouncer, because our application using spring framework which using jdbc and prepared statement. I try to talk with our developer about disabling prepared statement in this framework, they don't want do this. Thats why i will try to upgrade HW and buy CPU with more core as you say based on formula 3-4xcore. But most of this connection is idle. This is a web based app not a datawarehouse, thats why all this connection is lightwear. \nAbout my db freeze i set this kernel parameter:echo 1048576 > /proc/sys/vm/min_free_kbytesecho 80 > /proc/sys/vm/vfs_cache_pressureAnd my freeze intervals is steel smaller. I try to dig deeper.\n2013/11/6 Scott Marlowe <[email protected]>\nAlso also, the definitive page for postgres and dirty pages etc is here:\n\nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\nNot sure if it's out of date with more modern kernels. Maybe Greg will chime in.\n-- С уважением Селявка Евгений", "msg_date": "Mon, 11 Nov 2013 12:09:14 +0400", "msg_from": "=?KOI8-R?B?5dfHxc7JyiDzxczR18vB?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "On Mon, Nov 11, 2013 at 1:09 AM, Евгений Селявка <[email protected]> wrote:\n> Scott hi, i calculate all of my jdbc pool size. Maximum is 300 connections\n> from components wich use jdbc. I don't think that this is a good idea use\n> pgbouncer, because our application using spring framework which using jdbc\n> and prepared statement. I try to talk with our developer about disabling\n> prepared statement in this framework, they don't want do this. Thats why i\n> will try to upgrade HW and buy CPU with more core as you say based on\n> formula 3-4xcore. But most of this connection is idle. This is a web based\n> app not a datawarehouse, thats why all this connection is lightwear.\n>\n> About my db freeze i set this kernel parameter:\n> echo 1048576 > /proc/sys/vm/min_free_kbytes\n> echo 80 > /proc/sys/vm/vfs_cache_pressure\n>\n> And my freeze intervals is steel smaller. I try to dig deeper.\n\nwell you can hopefully reduce connections from jdbc pooling then. The\nfact that the connections are idle is good.\n\nThe problem you run into is what happens when things go into\n\"overload\" I.e. when the db server starts to slow down, more of those\nidle connections become not idle. 
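(If you want to see that happening, something along these lines against
the standard pg_stat_activity view is enough, assuming 9.2 or later where
the view has a state column; just a sketch:

-- how many backends are actually working vs sitting idle right now
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;

Run it every few seconds during one of the stalls and watch how quickly
the "active" count climbs.)
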
If all 300 are then waiting on the\ndb server, it will slow to a crawl and eventually fall over.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 09:14:43 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "On Mon, Nov 11, 2013 at 09:14:43AM -0700, Scott Marlowe wrote:\n> On Mon, Nov 11, 2013 at 1:09 AM, Евгений Селявка <[email protected]> wrote:\n> > Scott hi, i calculate all of my jdbc pool size. Maximum is 300 connections\n> > from components wich use jdbc. I don't think that this is a good idea use\n> > pgbouncer, because our application using spring framework which using jdbc\n> > and prepared statement. I try to talk with our developer about disabling\n> > prepared statement in this framework, they don't want do this. Thats why i\n> > will try to upgrade HW and buy CPU with more core as you say based on\n> > formula 3-4xcore. But most of this connection is idle. This is a web based\n> > app not a datawarehouse, thats why all this connection is lightwear.\n> >\n> > About my db freeze i set this kernel parameter:\n> > echo 1048576 > /proc/sys/vm/min_free_kbytes\n> > echo 80 > /proc/sys/vm/vfs_cache_pressure\n> >\n> > And my freeze intervals is steel smaller. I try to dig deeper.\n> \n> well you can hopefully reduce connections from jdbc pooling then. The\n> fact that the connections are idle is good.\n> \n> The problem you run into is what happens when things go into\n> \"overload\" I.e. when the db server starts to slow down, more of those\n> idle connections become not idle. If all 300 are then waiting on the\n> db server, it will slow to a crawl and eventually fall over.\n> \n+1 I would definitely encourage the use of pgbouncer to map the 300 connections\nto a saner number that your DB can actually handle. We had a similar problem\nand very, very occasionally the server would \"lockup\". Once we put the\nresource management pooler in place, performance has been the same best-case\nand much, much better worse-case and NO lockups.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 10:26:31 -0600", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "On Sun, Nov 10, 2013 at 11:48 PM, Евгений Селявка\n<[email protected]> wrote:\n> Sergey, yes this is all of my kernel setting. I don't use THP intentionally. I think that i need a special library to use THP with postgresql like this http://code.google.com/p/pgcookbook/wiki/Database_Server_Configuration. This is my values for this kernel settings:\n\nThen it is definitely not THP.\n\nps. 
BTW, pgcookbook has been moved to GitHub several weeks ago\nhttps://github.com/grayhemp/pgcookbook.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 12:02:56 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "On Mon, Nov 11, 2013 at 8:14 AM, Scott Marlowe <[email protected]> wrote:\n> well you can hopefully reduce connections from jdbc pooling then. The\n> fact that the connections are idle is good.\n>\n> The problem you run into is what happens when things go into\n> \"overload\" I.e. when the db server starts to slow down, more of those\n> idle connections become not idle. If all 300 are then waiting on the\n> db server, it will slow to a crawl and eventually fall over.\n\n+1.\n\nTry to monitor your connections, for example like this\n\nwhile true; do\necho -n \"$(date): \"\npsql -XAt -c \"select count(1) from pg_stat_activity\"\nsleep 1\ndone > activity.log\n\nand its correlation with slowdowns.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 12:10:34 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "On Thu, Nov 7, 2013 at 2:13 AM, Евгений Селявка <[email protected]>wrote:\n\n> All my sar statistics\n>\n...\n\n> sar -u ALL\n> 11:40:02 AM CPU %usr %nice %sys %iowait %steal\n> %irq %soft %guest %idle\n> 01:15:01 PM all 8.57 0.00 1.52 1.46 0.00\n> 0.00 0.05 0.00 88.40\n> 01:20:01 PM all 8.50 0.00 1.53 1.61 0.00\n> 0.00 0.05 0.00 88.31\n> 01:25:01 PM all 9.00 0.00 1.78 1.27 0.00\n> 0.00 0.06 0.00 87.89\n> 01:30:01 PM all 9.58 0.00 1.63 1.71 0.00\n> 0.00 0.06 0.00 87.01\n> 01:35:01 PM all 8.75 0.00 1.47 1.57 0.00\n> 0.00 0.06 0.00 88.15\n>\n\n\nDid a freeze-up occur in there someplace? Otherwise, that is not not so\nuseful.\n\nYou should try to decrease the sar interval to 1 min if you can. The extra\noverhead is negligible and the extra information can be very valuable. I'd\nalso have something like \"vmstat 5\" running and capture that. Although\nperhaps one of the options to sar other than -u capture that same\ninformation, I know little of the other sar options.\n\nCheers,\n\nJeff\n\nOn Thu, Nov 7, 2013 at 2:13 AM, Евгений Селявка <[email protected]> wrote:\nAll my sar statistics... 
\nsar -u ALL11:40:02 AM     CPU      %usr     %nice      %sys   %iowait    %steal      %irq     %soft    %guest     %idle01:15:01 PM     all      8.57      0.00      1.52      1.46      0.00      0.00      0.05      0.00     88.40\n\n01:20:01 PM     all      8.50      0.00      1.53      1.61      0.00      0.00      0.05      0.00     88.3101:25:01 PM     all      9.00      0.00      1.78      1.27      0.00      0.00      0.06      0.00     87.89\n\n01:30:01 PM     all      9.58      0.00      1.63      1.71      0.00      0.00      0.06      0.00     87.0101:35:01 PM     all      8.75      0.00      1.47      1.57      0.00      0.00      0.06      0.00     88.15\nDid a freeze-up occur in there someplace?  Otherwise, that is not not so useful.You should try to decrease the sar interval to 1 min if you can.  The extra overhead is negligible and the extra information can be very valuable.  I'd also have something like \"vmstat 5\" running and capture that.  Although perhaps one of the options to sar other than -u capture that same information, I know little of the other sar options.\nCheers,Jeff", "msg_date": "Mon, 11 Nov 2013 12:58:02 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql recommendation memory" }, { "msg_contents": "Sergey i will try to monitor my pgsql activity for several days.\n\nScott about pooling connection. Yesterday i start read about spring\nimplementation of jdbc our app use dbcp implementation:\nhttp://commons.apache.org/proper/commons-dbcp/configuration.html\nSo i have this parameter in config\n <property name=\"maxActive\" value=\"15\" />\n <property name=\"maxIdle\" value=\"1\" />\n <property name=\"maxWait\" value=\"10000\" />\n <property name=\"validationQuery\" value=\"SELECT 1\" />\n <property name=\"removeAbandoned\" value=\"true\" />\n <property name=\"removeAbandonedTimeout\" value=\"60\" />\nAnd i have several app that initialize and use this driver. Then i\ncalculate all of this initializing datasource - result is 380 active\nconnections. I simple add all maxActive node directive from all app dbcp\nconfig xml. But as i write earlier this is about 100 concurrent connection\nwhen i do 'select count(1) from pg_stat_activity'. I think that inexpedient\nto install pgbouncer in front off db, or may be somebody in this list\nhave some experience with pgbouncer and dbcp? Why i don't want to use\npgbouncer:\n1. I should use session mode because transaction doesn't support prepared\nstatement.\n2. If i use session mode, i will have the same number max connection to DB,\nbecause dbcp open connection to pgbouncer pgbouncer to DB and nobody close\nthis connection, only dbcp first, if i understand all correct. 
So if\noverload happen i will have the same 380 heavyweight connection to DB and\nall breaks down?\n\nI think that i should correctly configure my dbcp pool config xml file.\n\n2013/11/12 Jeff Janes <[email protected]>\n\n> On Thu, Nov 7, 2013 at 2:13 AM, Евгений Селявка <[email protected]>wrote:\n>\n>> All my sar statistics\n>>\n> ...\n>\n>> sar -u ALL\n>> 11:40:02 AM CPU %usr %nice %sys %iowait\n>> %steal %irq %soft %guest %idle\n>> 01:15:01 PM all 8.57 0.00 1.52 1.46\n>> 0.00 0.00 0.05 0.00 88.40\n>> 01:20:01 PM all 8.50 0.00 1.53 1.61\n>> 0.00 0.00 0.05 0.00 88.31\n>> 01:25:01 PM all 9.00 0.00 1.78 1.27\n>> 0.00 0.00 0.06 0.00 87.89\n>> 01:30:01 PM all 9.58 0.00 1.63 1.71\n>> 0.00 0.00 0.06 0.00 87.01\n>> 01:35:01 PM all 8.75 0.00 1.47 1.57\n>> 0.00 0.00 0.06 0.00 88.15\n>>\n>\n>\n> Did a freeze-up occur in there someplace? Otherwise, that is not not so\n> useful.\n>\n> You should try to decrease the sar interval to 1 min if you can. The\n> extra overhead is negligible and the extra information can be very\n> valuable. I'd also have something like \"vmstat 5\" running and capture\n> that. Although perhaps one of the options to sar other than -u capture\n> that same information, I know little of the other sar options.\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n\n-- \nС уважением Селявка Евгений\n\nSergey  i will try to monitor my pgsql activity for several days. Scott about pooling connection. Yesterday i start read about spring implementation of jdbc our app use dbcp implementation: http://commons.apache.org/proper/commons-dbcp/configuration.html\nSo i have this parameter in config        <property name=\"maxActive\" value=\"15\" />        <property name=\"maxIdle\" value=\"1\" />        <property name=\"maxWait\" value=\"10000\" />\n\n        <property name=\"validationQuery\" value=\"SELECT 1\" />        <property name=\"removeAbandoned\" value=\"true\" />        <property name=\"removeAbandonedTimeout\" value=\"60\" />\nAnd i have several app that initialize and use this driver. Then i calculate all of this initializing datasource - result is 380 active connections. I simple add all maxActive node directive from all app dbcp config xml. But as i write earlier this is about 100 concurrent connection when i do 'select count(1) from pg_stat_activity'. I think that inexpedient to install pgbouncer in front off  db, or may be somebody in this  list have some experience with pgbouncer and dbcp? Why i don't want to use pgbouncer:\n1. I should use session mode because transaction doesn't support prepared statement.2. If i use session mode, i will have the same number max connection to DB, because dbcp open connection to pgbouncer pgbouncer to DB and nobody close this connection, only dbcp first, if i understand all correct. So if overload happen i will have the same 380 heavyweight connection to DB and all breaks down?\nI think that i should correctly configure my dbcp pool config xml file.\n2013/11/12 Jeff Janes <[email protected]>\nOn Thu, Nov 7, 2013 at 2:13 AM, Евгений Селявка <[email protected]> wrote:\nAll my sar statistics... 
\nsar -u ALL11:40:02 AM     CPU      %usr     %nice      %sys   %iowait    %steal      %irq     %soft    %guest     %idle01:15:01 PM     all      8.57      0.00      1.52      1.46      0.00      0.00      0.05      0.00     88.40\n\n\n\n01:20:01 PM     all      8.50      0.00      1.53      1.61      0.00      0.00      0.05      0.00     88.3101:25:01 PM     all      9.00      0.00      1.78      1.27      0.00      0.00      0.06      0.00     87.89\n\n\n\n01:30:01 PM     all      9.58      0.00      1.63      1.71      0.00      0.00      0.06      0.00     87.0101:35:01 PM     all      8.75      0.00      1.47      1.57      0.00      0.00      0.06      0.00     88.15\nDid a freeze-up occur in there someplace?  Otherwise, that is not not so useful.You should try to decrease the sar interval to 1 min if you can.  The extra overhead is negligible and the extra information can be very valuable.  I'd also have something like \"vmstat 5\" running and capture that.  Although perhaps one of the options to sar other than -u capture that same information, I know little of the other sar options.\nCheers,Jeff\n-- С уважением Селявка Евгений", "msg_date": "Tue, 12 Nov 2013 12:41:01 +0400", "msg_from": "=?KOI8-R?B?5dfHxc7JyiDzxczR18vB?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql recommendation memory" } ]
[ { "msg_contents": "Hello all,\n\nI have one query running at ~ 7 seconds and I would like to know if it's\npossible to make it run faster, once this query runs lots of time in my\nexperiment.\n\nBasically the query return the topics of tweets published by users that the\nuser N follows and that are published between D1 and D2.\n\n*Query*:\n\nSELECT tt.tweet_id, tt.topic, tt.topic_value\n FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id =\nt.id\n WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n (SELECT followed_id FROM relationship WHERE follower_id = N)\nORDER BY tt.tweet_id;\n\n*Explain (Analyze, Buffers):*\n\n Sort (cost=3950701.24..3950708.22 rows=2793 width=20) (actual\ntime=24062.951..24064.475 rows=1640 loops=1)\n Sort Key: tt.tweet_id\n Sort Method: quicksort Memory: 97kB\n Buffers: shared hit=2390 read=32778\n I/O Timings: read=15118.402\n -> Nested Loop (cost=247.58..3950541.38 rows=2793 width=20) (actual\ntime=532.578..24057.319 rows=1640 loops=1)\n Buffers: shared hit=2387 read=32778\n I/O Timings: read=15118.402\n -> Hash Semi Join (cost=229.62..73239.03 rows=1361 width=8)\n(actual time=391.768..15132.889 rows=597 loops=1)\n Hash Cond: (t.user_id = relationship.followed_id)\n Buffers: shared hit=539 read=31862\n I/O Timings: read=6265.279\n -> Index Scan using tweet_creation_time_index on tweet t\n (cost=0.57..68869.39 rows=1472441 width=16) (actual time=82.752..11418.043\nrows=175\n9645 loops=1)\n Index Cond: ((creation_time >= '2013-05-05\n00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06\n00:00:00-03'::times\ntamp with time zone))\n Buffers: shared hit=534 read=31859\n I/O Timings: read=6193.764\n -> Hash (cost=227.12..227.12 rows=154 width=8) (actual\ntime=72.175..72.175 rows=106 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 3kB\n Buffers: shared hit=5 read=3\n I/O Timings: read=71.515\n -> Index Only Scan using relationship_id on\nrelationship (cost=0.42..227.12 rows=154 width=8) (actual\ntime=59.395..71.972 rows=106 loo\nps=1)\n Index Cond: (follower_id = 335093362)\n Heap Fetches: 0\n Buffers: shared hit=5 read=3\n I/O Timings: read=71.515\n -> Bitmap Heap Scan on tweet_topic tt (cost=17.96..2841.63\nrows=723 width=20) (actual time=14.909..14.917 rows=3 loops=597)\n Recheck Cond: (tweet_id = t.id)\n Buffers: shared hit=1848 read=916\n I/O Timings: read=8853.123\n -> Bitmap Index Scan on tweet_topic_pk (cost=0.00..17.78\nrows=723 width=0) (actual time=9.793..9.793 rows=3 loops=597)\n Index Cond: (tweet_id = t.id)\n Buffers: shared hit=1764 read=631\n I/O Timings: read=5811.532\n Total runtime: 24066.145 ms\n(34 rows)\n\n\n\n*Table structure*:\n\n Table \"public.tweet\"\n Column | Type | Modifiers |\nStorage | Stats target | Description\n-----------------------+--------------------------------------+--------------+-------------+------------------+-----------------\n id | bigint | not null\n | plain | |\n content | text |\n | extended | |\n creation_time | timestamp with time zone | | plain |\n |\n user_id | bigint |\n | plain | |\n retweeted | bigint | |\nplain | |\n retweet_count | integer | |\nplain | |\nIndexes:\n \"tweet_plk\" PRIMARY KEY, btree (id) CLUSTER\n \"tweet_creation_time_index\" btree (creation_time)\n \"tweet_id_index\" hash (id)\n \"tweet_ios_index\" btree (id, user_id, creation_time)\n \"tweet_retweeted_idx\" hash (retweeted)\n \"tweet_user_id_creation_time_index\" btree (creation_time, user_id)\n \"tweet_user_id_index\" hash (user_id)\n\n*System Information*:\nOS: Slackware 14.0\nPostgresql Version: *9.3 
Beta2*\n\n*postgresql.conf Settings:*\n\nwork_mem = 128MB\nshared_buffers = 1GB\nmaintenance_work_mem = 1536MB\nfsync = off\nsynchronous_commit = off\neffective_cache_size = 2GB\n\n*Additional information:*\n\nAll tables in this database are read only tables. I haven't post the\ndetails about other tables to not let the email big, as it seems the\nproblem is with the 'tweet' table.\n\nAny help would be much appreciated.\nBest regards,\nCaio Casimiro.\n\nHello all,I have one query running at ~ 7 seconds and I would like to know if it's possible to make it run faster, once this query runs lots of time in my experiment.\nBasically the query return the topics of tweets published by users that the user N follows and that are published between D1 and D2.Query:\nSELECT tt.tweet_id, tt.topic, tt.topic_value            FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id = t.id\n            WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in            (SELECT followed_id FROM relationship WHERE follower_id = N) ORDER BY tt.tweet_id;\nExplain (Analyze, Buffers): Sort  (cost=3950701.24..3950708.22 rows=2793 width=20) (actual time=24062.951..24064.475 rows=1640 loops=1)   Sort Key: tt.tweet_id\n   Sort Method: quicksort  Memory: 97kB   Buffers: shared hit=2390 read=32778   I/O Timings: read=15118.402   ->  Nested Loop  (cost=247.58..3950541.38 rows=2793 width=20) (actual time=532.578..24057.319 rows=1640 loops=1)\n         Buffers: shared hit=2387 read=32778         I/O Timings: read=15118.402         ->  Hash Semi Join  (cost=229.62..73239.03 rows=1361 width=8) (actual time=391.768..15132.889 rows=597 loops=1)\n               Hash Cond: (t.user_id = relationship.followed_id)               Buffers: shared hit=539 read=31862               I/O Timings: read=6265.279               ->  Index Scan using tweet_creation_time_index on tweet t  (cost=0.57..68869.39 rows=1472441 width=16) (actual time=82.752..11418.043 rows=175\n9645 loops=1)                     Index Cond: ((creation_time >= '2013-05-05 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06 00:00:00-03'::timestamp with time zone))\n                     Buffers: shared hit=534 read=31859                     I/O Timings: read=6193.764               ->  Hash  (cost=227.12..227.12 rows=154 width=8) (actual time=72.175..72.175 rows=106 loops=1)\n                     Buckets: 1024  Batches: 1  Memory Usage: 3kB                     Buffers: shared hit=5 read=3                     I/O Timings: read=71.515                     ->  Index Only Scan using relationship_id on relationship  (cost=0.42..227.12 rows=154 width=8) (actual time=59.395..71.972 rows=106 loo\nps=1)                           Index Cond: (follower_id = 335093362)                           Heap Fetches: 0                           Buffers: shared hit=5 read=3                           I/O Timings: read=71.515\n         ->  Bitmap Heap Scan on tweet_topic tt  (cost=17.96..2841.63 rows=723 width=20) (actual time=14.909..14.917 rows=3 loops=597)               Recheck Cond: (tweet_id = t.id)\n               Buffers: shared hit=1848 read=916               I/O Timings: read=8853.123               ->  Bitmap Index Scan on tweet_topic_pk  (cost=0.00..17.78 rows=723 width=0) (actual time=9.793..9.793 rows=3 loops=597)\n                     Index Cond: (tweet_id = t.id)                     Buffers: shared hit=1764 read=631                     I/O Timings: read=5811.532\n Total runtime: 24066.145 ms\n(34 rows)Table structure:                                     Table 
\"public.tweet\"\n    Column        |           Type                     | Modifiers | Storage  | Stats target | Description -----------------------+--------------------------------------+--------------+-------------+------------------+-----------------\n id                   | bigint                               | not null    | plain    |              |  content           | text                                 |                | extended |              | \n creation_time  | timestamp with time zone |                | plain    |              |  user_id           | bigint                               |                | plain    |              |  retweeted       | bigint                               |                | plain    |              | \n retweet_count | integer                             |                | plain    |              | Indexes:    \"tweet_plk\" PRIMARY KEY, btree (id) CLUSTER    \"tweet_creation_time_index\" btree (creation_time)\n    \"tweet_id_index\" hash (id)    \"tweet_ios_index\" btree (id, user_id, creation_time)    \"tweet_retweeted_idx\" hash (retweeted)    \"tweet_user_id_creation_time_index\" btree (creation_time, user_id)\n    \"tweet_user_id_index\" hash (user_id)System Information:OS: Slackware 14.0Postgresql Version: 9.3 Beta2\npostgresql.conf Settings:work_mem = 128MBshared_buffers = 1GBmaintenance_work_mem = 1536MBfsync = offsynchronous_commit = off\neffective_cache_size = 2GBAdditional information:All tables in this database are read only tables. I haven't post the details about other tables to not let the email big, as it seems the problem is with the 'tweet' table.\nAny help would be much appreciated.Best regards,Caio Casimiro.", "msg_date": "Sun, 3 Nov 2013 20:05:16 -0200", "msg_from": "Caio Casimiro <[email protected]>", "msg_from_op": true, "msg_subject": "Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "Caio Casimiro <[email protected]> wrote:\n\n> I have one query running at ~ 7 seconds and I would like to know\n> if it's possible to make it run faster, once this query runs lots\n> of time in my experiment.\n\n>   Buffers: shared hit=2390 read=32778\n\n> Total runtime: 24066.145 ms\n\n> effective_cache_size = 2GB\n\n> it seems the problem is with the 'tweet' table.\n\nThe EXPLAIN ANALYZE output shows it taking 24 seconds, 8.9 seconds\nof which is in accessing the tweet_topic table and 15.1 seconds in\naccessing the tweet table.  It looks like you have a painfully low\ncache hit ratio.  
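You can put a rough number on that from the standard statistics views;
this is only a sketch, and the counters are cumulative since the last
stats reset:

-- overall buffer cache hit ratio for the current database
SELECT round(sum(blks_hit) * 100.0
             / nullif(sum(blks_hit + blks_read), 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE datname = current_database();

Anything far below the high 90s for a working set you expect to be hot
suggests the data simply doesn't fit in RAM.
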
The plan looks reasonable to me; it looks like\nyou need more RAM to cache data if you want better speed.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Nov 2013 10:56:01 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "On 2013-11-04 13:56, Kevin Grittner wrote:\n> Caio Casimiro <[email protected]> wrote:\n>\n>> I have one query running at ~ 7 seconds and I would like to know\n>> if it's possible to make it run faster, once this query runs lots\n>> of time in my experiment.\n>> Buffers: shared hit=2390 read=32778\n>> Total runtime: 24066.145 ms\n>> effective_cache_size = 2GB\n>> it seems the problem is with the 'tweet' table.\n> The EXPLAIN ANALYZE output shows it taking 24 seconds, 8.9 seconds\n> of which is in accessing the tweet_topic table and 15.1 seconds in\n> accessing the tweet table. It looks like you have a painfully low\n> cache hit ratio. The plan looks reasonable to me; it looks like\n> you need more RAM to cache data if you want better speed.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\nThere's also an index scan that turns up 1.8 million rows, but only \n1,600 of them wind up in the final output. I'd start with restating the \n\"user_id in (select followed_id ...)\" as a join against the relationship \ntable. The planner is filtering first on the tweet time, but that \ndoesn't reduce the set of tweets down very well. Assuming that the user \nbeing looked up doesn't follow a large proportion of other users, I'd \nfigure that reducing the set first by followed users should be quicker.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 04 Nov 2013 14:03:45 -0500", "msg_from": "Elliot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "On Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]>wrote:\n\n> Hello all,\n>\n> I have one query running at ~ 7 seconds and I would like to know if it's\n> possible to make it run faster, once this query runs lots of time in my\n> experiment.\n>\n\n\nDo you mean you want it to be fast because it runs many times, or that you\nwant it to become fast after it runs many times (i.e. once the data is\nfully cached)? The plan you show takes 24 seconds, not 7 seconds.\n\n\n>\n> Basically the query return the topics of tweets published by users that\n> the user N follows and that are published between D1 and D2.\n>\n> *Query*:\n>\n> SELECT tt.tweet_id, tt.topic, tt.topic_value\n> FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id =\n> t.id\n> WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n> (SELECT followed_id FROM relationship WHERE follower_id = N)\n> ORDER BY tt.tweet_id;\n>\n\n\nI don't know if this affects the plan at all, but it is silly to do a left\njoin to \"tweet\" when the WHERE clause has conditions that can't be\nsatisfied with a null row. Also, you could try changing the IN-list to an\nEXISTS subquery.\n\nIs there some patterns to D1 and D2 that could help the caching? 
For\nexample, are they both usually in the just-recent past?\n\n\nIndexes:\n> \"tweet_plk\" PRIMARY KEY, btree (id) CLUSTER\n> \"tweet_creation_time_index\" btree (creation_time)\n> \"tweet_id_index\" hash (id)\n> \"tweet_ios_index\" btree (id, user_id, creation_time)\n> \"tweet_retweeted_idx\" hash (retweeted)\n> \"tweet_user_id_creation_time_index\" btree (creation_time, user_id)\n> \"tweet_user_id_index\" hash (user_id)\n>\n\n\nAre all of those indexes important? If your table is heavily\nupdated/inserted, which I assume it is, maintaining those indexes is going\nto take up precious RAM that could probably be better used elsewhere.\n\nCheers,\n\nJeff\n\nOn Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]> wrote:\nHello all,I have one query running at ~ 7 seconds and I would like to know if it's possible to make it run faster, once this query runs lots of time in my experiment.\nDo you mean you want it to be fast because it runs many times, or that you want it to become fast after it runs many times (i.e. once the data is fully cached)?  The plan you show takes 24 seconds, not 7 seconds.\n \nBasically the query return the topics of tweets published by users that the user N follows and that are published between D1 and D2.Query:\nSELECT tt.tweet_id, tt.topic, tt.topic_value            FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id = t.id\n            WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in            (SELECT followed_id FROM relationship WHERE follower_id = N) ORDER BY tt.tweet_id;\nI don't know if this affects the plan at all, but it is silly to do a left join to \"tweet\" when the WHERE clause has conditions that can't be satisfied with a null row.  Also, you could try changing the IN-list to an EXISTS subquery.\nIs there some patterns to D1 and D2 that could help the caching?  For example, are they both usually in the just-recent past?\nIndexes:    \"tweet_plk\" PRIMARY KEY, btree (id) CLUSTER    \"tweet_creation_time_index\" btree (creation_time)\n    \"tweet_id_index\" hash (id)    \"tweet_ios_index\" btree (id, user_id, creation_time)    \"tweet_retweeted_idx\" hash (retweeted)    \"tweet_user_id_creation_time_index\" btree (creation_time, user_id)\n    \"tweet_user_id_index\" hash (user_id)Are all of those indexes important?  If your table is heavily updated/inserted, which I assume it is, maintaining those indexes is going to take up precious RAM that could probably be better used elsewhere.\n Cheers,Jeff", "msg_date": "Mon, 4 Nov 2013 11:15:32 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "Thank you very much for your answers guys!\n\n\nOn Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\n\n> On Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]>\n> wrote:\n>\n>> Hello all,\n>>\n>> I have one query running at ~ 7 seconds and I would like to know if it's\n>> possible to make it run faster, once this query runs lots of time in my\n>> experiment.\n>>\n>\n>\n> Do you mean you want it to be fast because it runs many times, or that you\n> want it to become fast after it runs many times (i.e. once the data is\n> fully cached)? The plan you show takes 24 seconds, not 7 seconds.\n>\n\nI want it to be fast because it runs many times. I have an experiment that\nevaluates recommendation algorithms for a set of twitter users. 
This query\nreturns recommendation candidates so it is called a lot of times for\ndifferent users and time intervals.\n\n\n>\n>\n>>\n>> Basically the query return the topics of tweets published by users that\n>> the user N follows and that are published between D1 and D2.\n>>\n>> *Query*:\n>>\n>> SELECT tt.tweet_id, tt.topic, tt.topic_value\n>> FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id =\n>> t.id\n>> WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n>> (SELECT followed_id FROM relationship WHERE follower_id = N)\n>> ORDER BY tt.tweet_id;\n>>\n>\n>\n> I don't know if this affects the plan at all, but it is silly to do a left\n> join to \"tweet\" when the WHERE clause has conditions that can't be\n> satisfied with a null row. Also, you could try changing the IN-list to an\n> EXISTS subquery.\n>\n\nI'm sorry the ignorance, but I don't understand the issue with the left\njoin, could you explain more?\n\n\n> Is there some patterns to D1 and D2 that could help the caching? For\n> example, are they both usually in the just-recent past?\n>\nThe only pattern is that it is always a one day interval, e.g. D1 =\n'2013-05-01' and D2 = '2013-05-02'.\n\n>\n>\n> Indexes:\n>> \"tweet_plk\" PRIMARY KEY, btree (id) CLUSTER\n>> \"tweet_creation_time_index\" btree (creation_time)\n>> \"tweet_id_index\" hash (id)\n>> \"tweet_ios_index\" btree (id, user_id, creation_time)\n>> \"tweet_retweeted_idx\" hash (retweeted)\n>> \"tweet_user_id_creation_time_index\" btree (creation_time, user_id)\n>> \"tweet_user_id_index\" hash (user_id)\n>>\n>\n>\n> Are all of those indexes important? If your table is heavily\n> updated/inserted, which I assume it is, maintaining those indexes is going\n> to take up precious RAM that could probably be better used elsewhere.\n>\n\nProbably not. But once this database is read only, the quantity of index\ngrew following my desperation. =)\n\n>\n> Cheers,\n>\n> Jeff\n>\n\nThank you very much again!\nCaio\n\nThank you very much for your answers guys!\nOn Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\nOn Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]> wrote:\nHello all,\nI have one query running at ~ 7 seconds and I would like to know if it's possible to make it run faster, once this query runs lots of time in my experiment.\nDo you mean you want it to be fast because it runs many times, or that you want it to become fast after it runs many times (i.e. once the data is fully cached)?  The plan you show takes 24 seconds, not 7 seconds.\nI want it to be fast because it runs many times. I have an experiment that evaluates recommendation algorithms  for a set of twitter users. This query returns recommendation candidates so it is called a lot of times for different users and time intervals.\n \n \nBasically the query return the topics of tweets published by users that the user N follows and that are published between D1 and D2.Query:SELECT tt.tweet_id, tt.topic, tt.topic_value\n            FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id = t.id            WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n            (SELECT followed_id FROM relationship WHERE follower_id = N) ORDER BY tt.tweet_id;I don't know if this affects the plan at all, but it is silly to do a left join to \"tweet\" when the WHERE clause has conditions that can't be satisfied with a null row.  
Also, you could try changing the IN-list to an EXISTS subquery.\nI'm sorry the ignorance, but I don't understand the issue with the left join, could you explain more?\nIs there some patterns to D1 and D2 that could help the caching?  For example, are they both usually in the just-recent past?\nThe only pattern is that it is always a one day interval, e.g. D1 = '2013-05-01' and  D2 = '2013-05-02'.\n\nIndexes:    \"tweet_plk\" PRIMARY KEY, btree (id) CLUSTER    \"tweet_creation_time_index\" btree (creation_time)    \"tweet_id_index\" hash (id)\n    \"tweet_ios_index\" btree (id, user_id, creation_time)    \"tweet_retweeted_idx\" hash (retweeted)    \"tweet_user_id_creation_time_index\" btree (creation_time, user_id)\n    \"tweet_user_id_index\" hash (user_id)Are all of those indexes important?  If your table is heavily updated/inserted, which I assume it is, maintaining those indexes is going to take up precious RAM that could probably be better used elsewhere.\nProbably not. But once this database is read only, the quantity of index grew following my desperation. =)\n Cheers,Jeff\nThank you very much again!Caio", "msg_date": "Mon, 4 Nov 2013 18:44:05 -0200", "msg_from": "Caio Casimiro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "I should also say that table tweet has more than 400 millions hows and\ntable tweet_topic has estimated more than 800 millions rows.\n\nThanks again,\nCaio\n\n\nOn Mon, Nov 4, 2013 at 6:44 PM, Caio Casimiro <[email protected]>wrote:\n\n> Thank you very much for your answers guys!\n>\n>\n> On Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\n>\n>> On Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]>\n>> wrote:\n>>\n>>> Hello all,\n>>>\n>>> I have one query running at ~ 7 seconds and I would like to know if it's\n>>> possible to make it run faster, once this query runs lots of time in my\n>>> experiment.\n>>>\n>>\n>>\n>> Do you mean you want it to be fast because it runs many times, or that\n>> you want it to become fast after it runs many times (i.e. once the data is\n>> fully cached)? The plan you show takes 24 seconds, not 7 seconds.\n>>\n>\n> I want it to be fast because it runs many times. I have an experiment that\n> evaluates recommendation algorithms for a set of twitter users. This query\n> returns recommendation candidates so it is called a lot of times for\n> different users and time intervals.\n>\n>\n>>\n>>\n>>>\n>>> Basically the query return the topics of tweets published by users that\n>>> the user N follows and that are published between D1 and D2.\n>>>\n>>> *Query*:\n>>>\n>>> SELECT tt.tweet_id, tt.topic, tt.topic_value\n>>> FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id =\n>>> t.id\n>>> WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n>>> (SELECT followed_id FROM relationship WHERE follower_id = N)\n>>> ORDER BY tt.tweet_id;\n>>>\n>>\n>>\n>> I don't know if this affects the plan at all, but it is silly to do a\n>> left join to \"tweet\" when the WHERE clause has conditions that can't be\n>> satisfied with a null row. Also, you could try changing the IN-list to an\n>> EXISTS subquery.\n>>\n>\n> I'm sorry the ignorance, but I don't understand the issue with the left\n> join, could you explain more?\n>\n>\n>> Is there some patterns to D1 and D2 that could help the caching? For\n>> example, are they both usually in the just-recent past?\n>>\n> The only pattern is that it is always a one day interval, e.g. 
D1 =\n> '2013-05-01' and D2 = '2013-05-02'.\n>\n>>\n>>\n>> Indexes:\n>>> \"tweet_plk\" PRIMARY KEY, btree (id) CLUSTER\n>>> \"tweet_creation_time_index\" btree (creation_time)\n>>> \"tweet_id_index\" hash (id)\n>>> \"tweet_ios_index\" btree (id, user_id, creation_time)\n>>> \"tweet_retweeted_idx\" hash (retweeted)\n>>> \"tweet_user_id_creation_time_index\" btree (creation_time, user_id)\n>>> \"tweet_user_id_index\" hash (user_id)\n>>>\n>>\n>>\n>> Are all of those indexes important? If your table is heavily\n>> updated/inserted, which I assume it is, maintaining those indexes is going\n>> to take up precious RAM that could probably be better used elsewhere.\n>>\n>\n> Probably not. But once this database is read only, the quantity of index\n> grew following my desperation. =)\n>\n>>\n>> Cheers,\n>>\n>> Jeff\n>>\n>\n> Thank you very much again!\n> Caio\n>\n\nI should also say that table tweet has more than 400 millions hows and table tweet_topic has estimated more than 800 millions rows.Thanks again,Caio\nOn Mon, Nov 4, 2013 at 6:44 PM, Caio Casimiro <[email protected]> wrote:\nThank you very much for your answers guys!\nOn Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\n\nOn Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]> wrote:\nHello all,\nI have one query running at ~ 7 seconds and I would like to know if it's possible to make it run faster, once this query runs lots of time in my experiment.\n\nDo you mean you want it to be fast because it runs many times, or that you want it to become fast after it runs many times (i.e. once the data is fully cached)?  The plan you show takes 24 seconds, not 7 seconds.\nI want it to be fast because it runs many times. I have an experiment that evaluates recommendation algorithms  for a set of twitter users. This query returns recommendation candidates so it is called a lot of times for different users and time intervals.\n \n \n\nBasically the query return the topics of tweets published by users that the user N follows and that are published between D1 and D2.Query:\nSELECT tt.tweet_id, tt.topic, tt.topic_value\n            FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id = t.id            WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n            (SELECT followed_id FROM relationship WHERE follower_id = N) ORDER BY tt.tweet_id;I don't know if this affects the plan at all, but it is silly to do a left join to \"tweet\" when the WHERE clause has conditions that can't be satisfied with a null row.  Also, you could try changing the IN-list to an EXISTS subquery.\nI'm sorry the ignorance, but I don't understand the issue with the left join, could you explain more?\n\nIs there some patterns to D1 and D2 that could help the caching?  For example, are they both usually in the just-recent past?\n\nThe only pattern is that it is always a one day interval, e.g. D1 = '2013-05-01' and  D2 = '2013-05-02'.\n\nIndexes:    \"tweet_plk\" PRIMARY KEY, btree (id) CLUSTER    \"tweet_creation_time_index\" btree (creation_time)    \"tweet_id_index\" hash (id)\n    \"tweet_ios_index\" btree (id, user_id, creation_time)    \"tweet_retweeted_idx\" hash (retweeted)    \"tweet_user_id_creation_time_index\" btree (creation_time, user_id)\n    \"tweet_user_id_index\" hash (user_id)Are all of those indexes important?  If your table is heavily updated/inserted, which I assume it is, maintaining those indexes is going to take up precious RAM that could probably be better used elsewhere.\nProbably not. 
But once this database is read only, the quantity of index grew following my desperation. =)\n Cheers,Jeff\n\nThank you very much again!Caio", "msg_date": "Mon, 4 Nov 2013 18:50:25 -0200", "msg_from": "Caio Casimiro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Caio Casimiro\nSent: Monday, November 04, 2013 3:44 PM\nTo: Jeff Janes\nCc: [email protected]\nSubject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field\n\nThank you very much for your answers guys!\n\nOn Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\nOn Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]> wrote:\nHello all,\n\nI have one query running at ~ 7 seconds and I would like to know if it's possible to make it run faster, once this query runs lots of time in my experiment.\n\n\nDo you mean you want it to be fast because it runs many times, or that you want it to become fast after it runs many times (i.e. once the data is fully cached)?  The plan you show takes 24 seconds, not 7 seconds.\n\nI want it to be fast because it runs many times. I have an experiment that evaluates recommendation algorithms  for a set of twitter users. This query returns recommendation candidates so it is called a lot of times for different users and time intervals.\n \n \n\nBasically the query return the topics of tweets published by users that the user N follows and that are published between D1 and D2.\n\nQuery:\n\nSELECT tt.tweet_id, tt.topic, tt.topic_value\n            FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id = t.id\n            WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n            (SELECT followed_id FROM relationship WHERE follower_id = N) ORDER BY tt.tweet_id;\n\n\nI don't know if this affects the plan at all, but it is silly to do a left join to \"tweet\" when the WHERE clause has conditions that can't be satisfied with a null row.  
Also, you could try changing the IN-list to an EXISTS subquery.\n\nI'm sorry the ignorance, but I don't understand the issue with the left join, could you explain more?\n...........................................\nThank you very much again!\nCaio\n\n\nJust try the following:\n\nSELECT tt.tweet_id, tt.topic, tt.topic_value\n FROM tweet_topic AS tt JOIN tweet AS t ON (tt.tweet_id = t.id\n\t\t\t\t\t AND t.creation_time BETWEEN 'D1' AND 'D2' AND t.user_id in\n \t\t\t\t (SELECT followed_id FROM relationship WHERE follower_id = N))\n ORDER BY tt.tweet_id;\n\nAnd see if it helps with performance.\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Nov 2013 20:52:57 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "Hi Neyman, thank you for your answer.\n\nUnfortunately this query runs almost at the same time:\n\nSort (cost=4877693.98..4877702.60 rows=3449 width=20) (actual\ntime=25820.291..25821.845 rows=1640 loops=1)\n Sort Key: tt.tweet_id\n Sort Method: quicksort Memory: 97kB\n Buffers: shared hit=1849 read=32788\n -> Nested Loop (cost=247.58..4877491.32 rows=3449 width=20) (actual\ntime=486.839..25814.120 rows=1640 loops=1)\n Buffers: shared hit=1849 read=32788\n -> Hash Semi Join (cost=229.62..88553.23 rows=1681 width=8)\n(actual time=431.654..13209.159 rows=597 loops=1)\n Hash Cond: (t.user_id = relationship.followed_id)\n Buffers: shared hit=3 read=31870\n -> Index Scan using tweet_creation_time_index on tweet t\n (cost=0.57..83308.25 rows=1781234 width=16) (actual\ntime=130.144..10037.764 rows=1759645 loops=1)\n Index Cond: ((creation_time >= '2013-05-05\n00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06\n00:00:00-03'::timestamp with time zone))\n Buffers: shared hit=1 read=31867\n -> Hash (cost=227.12..227.12 rows=154 width=8) (actual\ntime=94.365..94.365 rows=106 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 3kB\n Buffers: shared hit=2 read=3\n -> Index Only Scan using relationship_id on\nrelationship (cost=0.42..227.12 rows=154 width=8) (actual\ntime=74.540..94.101 rows=106 loops=1)\n Index Cond: (follower_id = 335093362)\n Heap Fetches: 0\n Buffers: shared hit=2 read=3\n -> Bitmap Heap Scan on tweet_topic tt (cost=17.96..2841.63\nrows=723 width=20) (actual time=21.014..21.085 rows=3 loops=597)\n Recheck Cond: (tweet_id = t.id)\n Buffers: shared hit=1846 read=918\n -> Bitmap Index Scan on tweet_topic_pk (cost=0.00..17.78\nrows=723 width=0) (actual time=15.012..15.012 rows=3 loops=597)\n Index Cond: (tweet_id = t.id)\n Buffers: shared hit=1763 read=632\nTotal runtime: 25823.386 ms\n\nI have noticed that in both queries the index scan on\ntweet_creation_time_index is very expensive. 
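One idea I had was a covering index, so that the creation_time filter
could be answered without touching the heap. Something along these lines,
only a sketch I have not tried yet and the index name is made up:

-- covers everything this query needs from tweet: the range filter on
-- creation_time, the user_id semi-join and the id used to join tweet_topic
CREATE INDEX tweet_time_user_id_idx ON tweet (creation_time, user_id, id);

Since the tables are read only I assume the visibility map would be set
after a VACUUM and an index only scan would be possible, but I don't know
if the planner would actually prefer it here.
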
Is there anything I can do to\nmake the planner choose a index only scan?\n\nThank you,\nCaio\n\n\n\nOn Mon, Nov 4, 2013 at 6:52 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n> From: [email protected] [mailto:\n> [email protected]] On Behalf Of Caio Casimiro\n> Sent: Monday, November 04, 2013 3:44 PM\n> To: Jeff Janes\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field\n>\n> Thank you very much for your answers guys!\n>\n> On Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\n> On Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]\n> > wrote:\n> Hello all,\n>\n> I have one query running at ~ 7 seconds and I would like to know if it's\n> possible to make it run faster, once this query runs lots of time in my\n> experiment.\n>\n>\n> Do you mean you want it to be fast because it runs many times, or that you\n> want it to become fast after it runs many times (i.e. once the data is\n> fully cached)? The plan you show takes 24 seconds, not 7 seconds.\n>\n> I want it to be fast because it runs many times. I have an experiment that\n> evaluates recommendation algorithms for a set of twitter users. This query\n> returns recommendation candidates so it is called a lot of times for\n> different users and time intervals.\n>\n>\n>\n> Basically the query return the topics of tweets published by users that\n> the user N follows and that are published between D1 and D2.\n>\n> Query:\n>\n> SELECT tt.tweet_id, tt.topic, tt.topic_value\n> FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id =\n> t.id\n> WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n> (SELECT followed_id FROM relationship WHERE follower_id = N)\n> ORDER BY tt.tweet_id;\n>\n>\n> I don't know if this affects the plan at all, but it is silly to do a left\n> join to \"tweet\" when the WHERE clause has conditions that can't be\n> satisfied with a null row. 
Also, you could try changing the IN-list to an\n> EXISTS subquery.\n>\n> I'm sorry the ignorance, but I don't understand the issue with the left\n> join, could you explain more?\n> ...........................................\n> Thank you very much again!\n> Caio\n>\n>\n> Just try the following:\n>\n> SELECT tt.tweet_id, tt.topic, tt.topic_value\n>             FROM tweet_topic AS tt  JOIN tweet AS t ON (tt.tweet_id = t.id\n>                                                   AND t.creation_time\n> BETWEEN 'D1' AND 'D2' AND t.user_id in\n>                                          (SELECT followed_id FROM\n> relationship WHERE follower_id = N))\n>  ORDER BY tt.tweet_id;\n>\n> And see if it helps with performance.\n>\n> Regards,\n> Igor Neyman\n>
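\nFor reference, the EXISTS rewrite suggested above would look roughly like the sketch below. It is only an illustration built from the table and column names used in this thread, with 'D1', 'D2' and N standing in for the real values, and it has not been run against the actual schema:\n\nSELECT tt.tweet_id, tt.topic, tt.topic_value\n  FROM tweet_topic AS tt\n  JOIN tweet AS t ON tt.tweet_id = t.id\n WHERE t.creation_time BETWEEN 'D1' AND 'D2'\n   AND EXISTS (SELECT 1\n                 FROM relationship r\n                WHERE r.follower_id = N\n                  AND r.followed_id = t.user_id)\n ORDER BY tt.tweet_id;\n\nSemantically this is the same filter as the IN form, so it is mainly worth trying to see whether the planner picks a different join strategy for it.\n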
", "msg_date": "Mon, 4 Nov 2013 19:10:15 -0200", "msg_from": "Caio Casimiro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "On 2013-11-04 16:10, Caio Casimiro wrote:\n> Hi Neyman, thank you for your answer.\n>\n> Unfortunately this query runs almost at the same time:\n>\n> Sort  (cost=4877693.98..4877702.60 rows=3449 width=20) (actual \n> time=25820.291..25821.845 rows=1640 loops=1)\n>   Sort Key: tt.tweet_id\n>   Sort Method: quicksort  Memory: 97kB\n>   Buffers: shared hit=1849 read=32788\n>   ->  Nested Loop  (cost=247.58..4877491.32 rows=3449 width=20) \n> (actual time=486.839..25814.120 rows=1640 loops=1)\n>         Buffers: shared hit=1849 read=32788\n>         ->  Hash Semi Join  (cost=229.62..88553.23 rows=1681 width=8) \n> (actual time=431.654..13209.159 rows=597 loops=1)\n>               Hash Cond: (t.user_id = relationship.followed_id)\n>               Buffers: shared hit=3 read=31870\n>               ->  Index Scan using tweet_creation_time_index on tweet \n> t  (cost=0.57..83308.25 rows=1781234 width=16) (actual \n> time=130.144..10037.764 rows=1759645 loops=1)\n>                     Index Cond: ((creation_time >= '2013-05-05 \n> 00:00:00-03'::timestamp with time zone) AND (creation_time <= \n> '2013-05-06 00:00:00-03'::timestamp with time zone))\n>                     Buffers: shared hit=1 read=31867\n>               ->  Hash  (cost=227.12..227.12 rows=154 width=8) (actual \n> time=94.365..94.365 rows=106 loops=1)\n>                     Buckets: 1024  Batches: 1  Memory Usage: 3kB\n>                     Buffers: shared hit=2 read=3\n>                     ->  Index Only Scan using relationship_id on \n> relationship  (cost=0.42..227.12 rows=154 width=8) (actual \n> time=74.540..94.101 rows=106 loops=1)\n>                           Index Cond: (follower_id = 335093362)\n>                           Heap Fetches: 0\n>                           Buffers: shared hit=2 read=3\n>         ->  Bitmap Heap Scan on tweet_topic tt  (cost=17.96..2841.63 \n> rows=723 width=20) (actual time=21.014..21.085 rows=3 loops=597)\n>               Recheck Cond: (tweet_id = t.id <http://t.id>)\n>               Buffers: shared hit=1846 read=918\n>               ->  Bitmap Index Scan on tweet_topic_pk \n> (cost=0.00..17.78 rows=723 width=0) (actual time=15.012..15.012 \n> rows=3 loops=597)\n>                     Index Cond: (tweet_id = t.id <http://t.id>)\n>                     Buffers: shared hit=1763 read=632\n> Total runtime: 25823.386 ms\n>\n> I have noticed that in both queries the index scan on \n> tweet_creation_time_index is very expensive. Is there anything I can \n> do to make the planner choose a index only scan?\n>\n>\nYes, because that part of the query is kicking back so many rows, many \nof which are totally unnecessary anyway - you're first getting all the \ntweets in a particular time range, then limiting them down to just users \nthat are followed. Here's clarification on the approach I mentioned \nearlier.
All you should really need are basic (btree) indexes on your \ndifferent keys (tweet_topic.tweet_id, tweet.id, tweet.user_id, \nrelationship.follower_id, relationship.followed_id). I also changed the \nleft join to an inner join as somebody pointed out that your logic \namounted to reducing the match to an inner join anyway.\n\nSELECT tt.tweet_id, tt.topic, tt.topic_value\nFROM tweet_topic AS tt\n  JOIN tweet AS t\n    ON tt.tweet_id = t.id\n  join relationship\n    on t.user_id = relationship.followed_id\nWHERE creation_time BETWEEN 'D1' AND 'D2'\n  AND relationship.follower_id = N\nORDER BY tt.tweet_id\n;\n
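\nIf any of those key columns are not indexed yet, the definitions could look something like this sketch. The index names below are invented, and several of these clearly exist already (the plans above use relationship_id and tweet_topic_pk), so only genuinely missing ones would need to be added:\n\nCREATE INDEX tweet_user_id_idx            ON tweet (user_id);\nCREATE INDEX tweet_topic_tweet_id_idx     ON tweet_topic (tweet_id);\nCREATE INDEX relationship_follower_id_idx ON relationship (follower_id);\nCREATE INDEX relationship_followed_id_idx ON relationship (followed_id);\n\ntweet.id is the primary key, so it is already backed by an index.\n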
", "msg_date": "Mon, 04 Nov 2013 16:22:47 -0500", "msg_from": "Elliot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "From: Caio Casimiro [mailto:[email protected]] \nSent: Monday, November 04, 2013 4:10 PM\nTo: Igor Neyman\nCc: Jeff Janes; [email protected]\nSubject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field\n\nHi Neyman, thank you for your answer.\nUnfortunately this query runs almost at the same time:\n\nSort  (cost=4877693.98..4877702.60 rows=3449 width=20) (actual time=25820.291..25821.845 rows=1640 loops=1)\n  Sort Key: tt.tweet_id\n  Sort Method: quicksort  Memory: 97kB\n  Buffers: shared hit=1849 read=32788\n  ->  Nested Loop  (cost=247.58..4877491.32 rows=3449 width=20) (actual time=486.839..25814.120 rows=1640 loops=1)\n        Buffers: shared hit=1849 read=32788\n        ->  Hash Semi Join  (cost=229.62..88553.23 rows=1681 width=8) (actual time=431.654..13209.159 rows=597 loops=1)\n              Hash Cond: (t.user_id = relationship.followed_id)\n              Buffers: shared hit=3 read=31870\n              ->  Index Scan using tweet_creation_time_index on tweet t  (cost=0.57..83308.25 rows=1781234 width=16) (actual time=130.144..10037.764 rows=1759645 loops=1)\n                    Index Cond: ((creation_time >= '2013-05-05 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06 00:00:00-03'::timestamp with time zone))\n                    Buffers: shared hit=1 read=31867\n              ->  Hash  (cost=227.12..227.12 rows=154 width=8) (actual time=94.365..94.365 rows=106 loops=1)\n                    Buckets: 1024  Batches: 1  Memory Usage: 3kB\n                    Buffers: shared hit=2 read=3\n                    ->  Index Only Scan using relationship_id on relationship  (cost=0.42..227.12 rows=154 width=8) (actual time=74.540..94.101 rows=106 loops=1)\n                          Index Cond: (follower_id = 335093362)\n                          Heap Fetches: 0\n                          Buffers: shared hit=2 read=3\n        ->  Bitmap Heap Scan on tweet_topic tt  (cost=17.96..2841.63 rows=723 width=20) (actual time=21.014..21.085 rows=3 loops=597)\n              Recheck Cond: (tweet_id = t.id)\n              Buffers: shared hit=1846 read=918\n              ->  Bitmap Index Scan on tweet_topic_pk  (cost=0.00..17.78 rows=723 width=0) (actual time=15.012..15.012 rows=3 loops=597)\n                    Index Cond: (tweet_id = t.id)\n                    Buffers: shared hit=1763 read=632\nTotal runtime: 25823.386 ms\n\nI have noticed that in both queries the index scan on tweet_creation_time_index is very expensive.
Is there anything I can do to make the planner choose a index only scan?\n\nThank you,\nCaio\n\nJust try the following:\n\nSELECT tt.tweet_id, tt.topic, tt.topic_value\n            FROM tweet_topic AS tt  JOIN tweet AS t ON (tt.tweet_id = t.id\n                                                  AND t.creation_time BETWEEN 'D1' AND 'D2' AND t.user_id in\n                                         (SELECT followed_id FROM relationship WHERE follower_id = N))\n ORDER BY tt.tweet_id;\n\nAnd see if it helps with performance.\n\nRegards,\nIgor Neyman\n\nWhat is your hardware configuration, and Postgres config parameters modified from default values?\n\nRegards,\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Nov 2013 21:26:09 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "These are the parameters I have set in postgresql.conf:\n\nwork_mem = 128MB\nshared_buffers = 1GB\nmaintenance_work_mem = 1536MB\nfsync = off\nsynchronous_commit = off\neffective_cache_size = 2GB\n\nThe hardware is a modest one:\nCPU: Intel(R) Atom(TM) CPU 230 @ 1.60GHz\nRAM: 2GB\nHD: 1TV 7200 RPM (WDC WD10EZEX-00RKKA0)\n\nThis machine runs a slackware 14.0 dedicated to the Postgresql.\n\nThank you,\nCaio\n\n\n\nOn Mon, Nov 4, 2013 at 7:26 PM, Igor Neyman <[email protected]> wrote:\n\n> From: Caio Casimiro [mailto:[email protected]]\n> Sent: Monday, November 04, 2013 4:10 PM\n> To: Igor Neyman\n> Cc: Jeff Janes; [email protected]\n> Subject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field\n>\n> Hi Neyman, thank you for your answer.\n> Unfortunately this query runs almost at the same time:\n>\n> Sort (cost=4877693.98..4877702.60 rows=3449 width=20) (actual\n> time=25820.291..25821.845 rows=1640 loops=1)\n> Sort Key: tt.tweet_id\n> Sort Method: quicksort Memory: 97kB\n> Buffers: shared hit=1849 read=32788\n> -> Nested Loop (cost=247.58..4877491.32 rows=3449 width=20) (actual\n> time=486.839..25814.120 rows=1640 loops=1)\n> Buffers: shared hit=1849 read=32788\n> -> Hash Semi Join (cost=229.62..88553.23 rows=1681 width=8)\n> (actual time=431.654..13209.159 rows=597 loops=1)\n> Hash Cond: (t.user_id = relationship.followed_id)\n> Buffers: shared hit=3 read=31870\n> -> Index Scan using tweet_creation_time_index on tweet t\n> (cost=0.57..83308.25 rows=1781234 width=16) (actual\n> time=130.144..10037.764 rows=1759645 loops=1)\n> Index Cond: ((creation_time >= '2013-05-05\n> 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06\n> 00:00:00-03'::timestamp with time zone))\n> Buffers: shared hit=1 read=31867\n> -> Hash (cost=227.12..227.12 rows=154 width=8) (actual\n> time=94.365..94.365 rows=106 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 3kB\n> Buffers: shared hit=2 read=3\n> -> Index Only Scan using relationship_id on\n> relationship (cost=0.42..227.12 rows=154 width=8) (actual\n> time=74.540..94.101 rows=106 loops=1)\n> Index Cond: (follower_id = 335093362)\n> Heap Fetches: 0\n> Buffers: shared hit=2 read=3\n> -> Bitmap Heap Scan on tweet_topic tt (cost=17.96..2841.63\n> rows=723 width=20) (actual time=21.014..21.085 rows=3 loops=597)\n> Recheck Cond: (tweet_id = t.id)\n> Buffers: shared hit=1846 read=918\n> -> Bitmap Index Scan on tweet_topic_pk (cost=0.00..17.78\n> rows=723 width=0) (actual time=15.012..15.012 rows=3 
loops=597)\n>                     Index Cond: (tweet_id = t.id)\n>                     Buffers: shared hit=1763 read=632\n> Total runtime: 25823.386 ms\n>\n> I have noticed that in both queries the index scan on\n> tweet_creation_time_index is very expensive. Is there anything I can do to\n> make the planner choose a index only scan?\n>\n> Thank you,\n> Caio\n>\n> Just try the following:\n>\n> SELECT tt.tweet_id, tt.topic, tt.topic_value\n>             FROM tweet_topic AS tt  JOIN tweet AS t ON (tt.tweet_id = t.id\n>                                                   AND t.creation_time\n> BETWEEN 'D1' AND 'D2' AND t.user_id in\n>                                          (SELECT followed_id FROM\n> relationship WHERE follower_id = N))\n>  ORDER BY tt.tweet_id;\n>\n> And see if it helps with performance.\n>\n> Regards,\n> Igor Neyman\n>\n> What is your hardware configuration, and Postgres config parameters\n> modified from default values?\n>\n> Regards,\n> Igor Neyman\n>
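\nGiven the 2GB of RAM listed above, a more conservative postgresql.conf starting point (in line with the advice given later in this thread) would look roughly like the following; the work_mem and maintenance_work_mem figures are only illustrative guesses, and fsync/synchronous_commit are back at their safe defaults:\n\nshared_buffers = 256MB\neffective_cache_size = 1GB\nwork_mem = 16MB\nmaintenance_work_mem = 128MB\nfsync = on\nsynchronous_commit = on\n\nTreat the exact numbers as a sketch to be re-checked with EXPLAIN (ANALYZE, BUFFERS) and normal monitoring rather than as firm recommendations.\n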
", "msg_date": "Mon, 4 Nov 2013 19:32:45 -0200", "msg_from": "Caio Casimiro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "Hi Elliot, thank you for your answer.\n\nI tried this query but it still suffer with index scan on\ntweet_creation_time_index:\n\n\"Sort  (cost=4899904.57..4899913.19 rows=3447 width=20) (actual\ntime=37560.938..37562.503 rows=1640 loops=1)\"\n\"  Sort Key: tt.tweet_id\"\n\"  Sort Method: quicksort  Memory: 97kB\"\n\"  Buffers: shared hit=1849 read=32788\"\n\"  ->  Nested Loop  (cost=105592.06..4899702.04 rows=3447 width=20) (actual\ntime=19151.036..37555.227 rows=1640 loops=1)\"\n\"        Buffers: shared hit=1849 read=32788\"\n\"        ->  Hash Join  (cost=105574.10..116461.68 rows=1679 width=8)\n(actual time=19099.848..19127.606 rows=597 loops=1)\"\n\"              Hash Cond: (relationship.followed_id = t.user_id)\"\n\"              Buffers: shared hit=3 read=31870\"\n\"              ->  Index Only Scan using relationship_id on relationship\n (cost=0.42..227.12 rows=154 width=8) (actual time=66.102..89.721 rows=106\nloops=1)\"\n\"                    Index Cond: (follower_id = 335093362)\"\n\"                    Heap Fetches: 0\"\n\"                    Buffers: shared hit=2 read=3\"\n\"              ->  Hash  (cost=83308.25..83308.25 rows=1781234 width=16)\n(actual time=19031.916..19031.916 rows=1759645 loops=1)\"\n\"                    Buckets: 262144  Batches: 1  Memory Usage: 61863kB\"\n\"                    Buffers: shared hit=1 read=31867\"\n\"                    ->  Index Scan using tweet_creation_time_index on\ntweet t  (cost=0.57..83308.25 rows=1781234 width=16) (actual\ntime=48.595..13759.768 rows=1759645 loops=1)\"\n\"                          Index Cond: ((creation_time >= '2013-05-05\n00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06\n00:00:00-03'::timestamp with time zone))\"\n\"                          Buffers: shared hit=1 read=31867\"\n\"        ->  Bitmap Heap Scan on tweet_topic tt  (cost=17.96..2841.63\nrows=723 width=20) (actual time=30.774..30.847 rows=3 loops=597)\"\n\"              Recheck Cond: (tweet_id = t.id)\"\n\"              Buffers: shared hit=1846 read=918\"\n\"              ->  Bitmap Index Scan on tweet_topic_pk  (cost=0.00..17.78\nrows=723 width=0) (actual time=23.084..23.084 rows=3 loops=597)\"\n\"                    Index Cond: (tweet_id = t.id)\"\n\"                    Buffers: shared hit=1763 read=632\"\n\nYou said that I would need B-Tree indexes on the fields that I want the\nplanner to use index only scan, and I think I have them already on the\ntweet table:\n\n\"tweet_ios_index\" btree (id, user_id, creation_time)\n\nShouldn't the tweet_ios_index be enough to make the scan over\ntweet_creation_time_index be a index only scan?
And, more important, would\nit be really faster?\n\nThank you very much,\nCaio\n\n\nOn Mon, Nov 4, 2013 at 7:22 PM, Elliot <[email protected]> wrote:\n\n> On 2013-11-04 16:10, Caio Casimiro wrote:\n>\n> Hi Neyman, thank you for your answer.\n>\n> Unfortunately this query runs almost at the same time:\n>\n> Sort (cost=4877693.98..4877702.60 rows=3449 width=20) (actual\n> time=25820.291..25821.845 rows=1640 loops=1)\n> Sort Key: tt.tweet_id\n> Sort Method: quicksort Memory: 97kB\n> Buffers: shared hit=1849 read=32788\n> -> Nested Loop (cost=247.58..4877491.32 rows=3449 width=20) (actual\n> time=486.839..25814.120 rows=1640 loops=1)\n> Buffers: shared hit=1849 read=32788\n> -> Hash Semi Join (cost=229.62..88553.23 rows=1681 width=8)\n> (actual time=431.654..13209.159 rows=597 loops=1)\n> Hash Cond: (t.user_id = relationship.followed_id)\n> Buffers: shared hit=3 read=31870\n> -> Index Scan using tweet_creation_time_index on tweet t\n> (cost=0.57..83308.25 rows=1781234 width=16) (actual\n> time=130.144..10037.764 rows=1759645 loops=1)\n> Index Cond: ((creation_time >= '2013-05-05\n> 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06\n> 00:00:00-03'::timestamp with time zone))\n> Buffers: shared hit=1 read=31867\n> -> Hash (cost=227.12..227.12 rows=154 width=8) (actual\n> time=94.365..94.365 rows=106 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 3kB\n> Buffers: shared hit=2 read=3\n> -> Index Only Scan using relationship_id on\n> relationship (cost=0.42..227.12 rows=154 width=8) (actual\n> time=74.540..94.101 rows=106 loops=1)\n> Index Cond: (follower_id = 335093362)\n> Heap Fetches: 0\n> Buffers: shared hit=2 read=3\n> -> Bitmap Heap Scan on tweet_topic tt (cost=17.96..2841.63\n> rows=723 width=20) (actual time=21.014..21.085 rows=3 loops=597)\n> Recheck Cond: (tweet_id = t.id)\n> Buffers: shared hit=1846 read=918\n> -> Bitmap Index Scan on tweet_topic_pk (cost=0.00..17.78\n> rows=723 width=0) (actual time=15.012..15.012 rows=3 loops=597)\n> Index Cond: (tweet_id = t.id)\n> Buffers: shared hit=1763 read=632\n> Total runtime: 25823.386 ms\n>\n> I have noticed that in both queries the index scan on\n> tweet_creation_time_index is very expensive. Is there anything I can do to\n> make the planner choose a index only scan?\n>\n>\n> Yes, because that part of the query is kicking back so many rows, many\n> of which are totally unnecessary anyway - you're first getting all the\n> tweets in a particular time range, then limiting them down to just users\n> that are followed. Here's clarification on the approach I mentioned\n> earlier. All you should really need are basic (btree) indexes on your\n> different keys (tweet_topic.tweet_id, tweet.id, tweet.user_id,\n> relationship.follower_id, relationship.followed_id). 
I also changed the\n> left join to an inner join as somebody pointed out that your logic amounted\n> to reducing the match to an inner join anyway.\n>\n> SELECT tt.tweet_id, tt.topic, tt.topic_value\n> FROM tweet_topic AS tt\n>   JOIN tweet AS t\n>     ON tt.tweet_id = t.id\n>   join relationship\n>     on t.user_id = relationship.followed_id\n>\n> WHERE creation_time BETWEEN 'D1' AND 'D2'\n>   AND relationship.follower_id = N\n> ORDER BY tt.tweet_id\n> ;\n>\n>
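\nOn the index-only-scan question above: tweet_ios_index leads with id, so a search on user_id and creation_time alone gets little help from it. An index shaped for this query's filter, with the equality column first and the range column second (essentially what the next reply suggests), would be sketched as:\n\nCREATE INDEX tweet_user_id_creation_time_idx\n    ON tweet (user_id, creation_time);\n\nThe index name here is made up, and whether the planner actually uses it, index-only or otherwise, still has to be confirmed with EXPLAIN (ANALYZE, BUFFERS).\n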
\n\n SELECT tt.tweet_id, tt.topic, tt.topic_value\n FROM tweet_topic AS tt\n   JOIN tweet AS t\n     ON tt.tweet_id = t.id\n   join relationship\n     on t.user_id = relationship.followed_id\n WHERE creation_time BETWEEN 'D1' AND 'D2'\n   AND relationship.follower_id = N\n ORDER BY tt.tweet_id\n ;", "msg_date": "Mon, 4 Nov 2013 20:10:21 -0200", "msg_from": "Caio Casimiro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "Hello,\n I think you could try with an index on tweet table columns\n\"user_id, creation_time\" [in this order , because the first argument is for\nthe equality predicate and the second with the range scan predicate, the\nindex tweet_user_id_creation_time_index is not ok because it has the\nreverse order ] so the Hash Join between relationship and tweet will\nbecome in theory a netsted loop and so the filter relationship.followed_id\n= t.user_id will be pushed on the new index search condition with also\nthe creation_time > .. and creation_time < ... . In this manner you will\nreduce the random i/o of the scanning of 1759645 rows from tweet that are\nfilter later now in hash join to 1679.\n\nI hope it will work, if not, I hope you could attach the DDL of the table (\nwith constraints and indexes) to better understand the problem.\n\nBye\n\n\n2013/11/4 Caio Casimiro <[email protected]>\n\n> Hi Elliot, thank you for your answer.\n>\n> I tried this query but it still suffer with index scan on\n> tweet_creation_time_index:\n>\n> \"Sort (cost=4899904.57..4899913.19 rows=3447 width=20) (actual\n> time=37560.938..37562.503 rows=1640 loops=1)\"\n> \" Sort Key: tt.tweet_id\"\n> \" Sort Method: quicksort Memory: 97kB\"\n> \" Buffers: shared hit=1849 read=32788\"\n> \" -> Nested Loop (cost=105592.06..4899702.04 rows=3447 width=20)\n> (actual time=19151.036..37555.227 rows=1640 loops=1)\"\n> \" Buffers: shared hit=1849 read=32788\"\n> \" -> Hash Join (cost=105574.10..116461.68 rows=1679 width=8)\n> (actual time=19099.848..19127.606 rows=597 loops=1)\"\n> \" Hash Cond: (relationship.followed_id = t.user_id)\"\n> \" Buffers: shared hit=3 read=31870\"\n> \" -> Index Only Scan using relationship_id on relationship\n> (cost=0.42..227.12 rows=154 width=8) (actual time=66.102..89.721\n> rows=106 loops=1)\"\n> \" Index Cond: (follower_id = 335093362)\"\n> \" Heap Fetches: 0\"\n> \" Buffers: shared hit=2 read=3\"\n> \" -> Hash (cost=83308.25..83308.25 rows=1781234 width=16)\n> (actual time=19031.916..19031.916 rows=1759645 loops=1)\"\n> \" Buckets: 262144 Batches: 1 Memory Usage: 61863kB\"\n> \" Buffers: shared hit=1 read=31867\"\n> \" -> Index Scan using tweet_creation_time_index on\n> tweet t (cost=0.57..83308.25 rows=1781234 width=16) (actual\n> time=48.595..13759.768 rows=1759645 loops=1)\"\n> \" Index Cond: ((creation_time >= '2013-05-05\n> 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06\n> 00:00:00-03'::timestamp with time zone))\"\n> \" Buffers: shared hit=1 read=31867\"\n> \" -> Bitmap Heap Scan on tweet_topic tt (cost=17.96..2841.63\n> rows=723 width=20) (actual time=30.774..30.847 rows=3 loops=597)\"\n> \" Recheck Cond: (tweet_id = t.id)\"\n> \" Buffers: shared hit=1846 read=918\"\n> \" -> Bitmap Index Scan on tweet_topic_pk (cost=0.00..17.78\n> rows=723 width=0) (actual time=23.084..23.084 rows=3 loops=597)\"\n> \" Index Cond: (tweet_id = t.id)\"\n> \" Buffers: shared hit=1763 read=632\"\n>\n> You said that I would need B-Tree indexes on the fields that I want the\n> 
planner to use index only scan, and I think I have them already on the\n> tweet table:\n>\n> \"tweet_ios_index\" btree (id, user_id, creation_time)\n>\n> Shouldn't the tweet_ios_index be enough to make the scan over\n> tweet_creation_time_index be a index only scan? And, more important, would\n> it be really faster?\n>\n> Thank you very much,\n> Caio\n>\n>\n> On Mon, Nov 4, 2013 at 7:22 PM, Elliot <[email protected]> wrote:\n>\n>> On 2013-11-04 16:10, Caio Casimiro wrote:\n>>\n>> Hi Neyman, thank you for your answer.\n>>\n>> Unfortunately this query runs almost at the same time:\n>>\n>> Sort (cost=4877693.98..4877702.60 rows=3449 width=20) (actual\n>> time=25820.291..25821.845 rows=1640 loops=1)\n>> Sort Key: tt.tweet_id\n>> Sort Method: quicksort Memory: 97kB\n>> Buffers: shared hit=1849 read=32788\n>> -> Nested Loop (cost=247.58..4877491.32 rows=3449 width=20) (actual\n>> time=486.839..25814.120 rows=1640 loops=1)\n>> Buffers: shared hit=1849 read=32788\n>> -> Hash Semi Join (cost=229.62..88553.23 rows=1681 width=8)\n>> (actual time=431.654..13209.159 rows=597 loops=1)\n>> Hash Cond: (t.user_id = relationship.followed_id)\n>> Buffers: shared hit=3 read=31870\n>> -> Index Scan using tweet_creation_time_index on tweet t\n>> (cost=0.57..83308.25 rows=1781234 width=16) (actual\n>> time=130.144..10037.764 rows=1759645 loops=1)\n>> Index Cond: ((creation_time >= '2013-05-05\n>> 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06\n>> 00:00:00-03'::timestamp with time zone))\n>> Buffers: shared hit=1 read=31867\n>> -> Hash (cost=227.12..227.12 rows=154 width=8) (actual\n>> time=94.365..94.365 rows=106 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 3kB\n>> Buffers: shared hit=2 read=3\n>> -> Index Only Scan using relationship_id on\n>> relationship (cost=0.42..227.12 rows=154 width=8) (actual\n>> time=74.540..94.101 rows=106 loops=1)\n>> Index Cond: (follower_id = 335093362)\n>> Heap Fetches: 0\n>> Buffers: shared hit=2 read=3\n>> -> Bitmap Heap Scan on tweet_topic tt (cost=17.96..2841.63\n>> rows=723 width=20) (actual time=21.014..21.085 rows=3 loops=597)\n>> Recheck Cond: (tweet_id = t.id)\n>> Buffers: shared hit=1846 read=918\n>> -> Bitmap Index Scan on tweet_topic_pk (cost=0.00..17.78\n>> rows=723 width=0) (actual time=15.012..15.012 rows=3 loops=597)\n>> Index Cond: (tweet_id = t.id)\n>> Buffers: shared hit=1763 read=632\n>> Total runtime: 25823.386 ms\n>>\n>> I have noticed that in both queries the index scan on\n>> tweet_creation_time_index is very expensive. Is there anything I can do to\n>> make the planner choose a index only scan?\n>>\n>>\n>> Yes, because that part of the query is kicking back so many rows, many\n>> of which are totally unnecessary anyway - you're first getting all the\n>> tweets in a particular time range, then limiting them down to just users\n>> that are followed. Here's clarification on the approach I mentioned\n>> earlier. All you should really need are basic (btree) indexes on your\n>> different keys (tweet_topic.tweet_id, tweet.id, tweet.user_id,\n>> relationship.follower_id, relationship.followed_id). 
I also changed the\n>> left join to an inner join as somebody pointed out that your logic amounted\n>> to reducing the match to an inner join anyway.\n>>\n>> SELECT tt.tweet_id, tt.topic, tt.topic_value\n>> FROM tweet_topic AS tt\n>> JOIN tweet AS t\n>> ON tt.tweet_id = t.id\n>> join relationship\n>> on t.user_id = relationship.followed_id\n>>\n>> WHERE creation_time BETWEEN 'D1' AND 'D2'\n>> AND relationship.follower_id = N\n>> ORDER BY tt.tweet_id\n>> ;\n>>\n>>\n>\n\nHello,              I think you could try with an index on tweet table columns \"user_id, creation_time\" [in this order , because the first argument is for the equality predicate and the second with the range scan predicate, the index tweet_user_id_creation_time_index is not ok because it has the reverse order ]  so the Hash Join between relationship and tweet   will become in theory a netsted loop and so the filter relationship.followed_id = t.user_id   will be pushed on the new index search condition with also the creation_time > .. and creation_time < ... . In this manner you will reduce the random i/o of the scanning of 1759645 rows from tweet that are filter later now in hash join to 1679.\nI hope it will work, if not, I hope you could attach the DDL of the table ( with constraints and indexes) to better understand the problem.Bye\n2013/11/4 Caio Casimiro <[email protected]>\nHi Elliot, thank you for your answer.I tried this query but it still suffer with index scan on tweet_creation_time_index:\"Sort  (cost=4899904.57..4899913.19 rows=3447 width=20) (actual time=37560.938..37562.503 rows=1640 loops=1)\"\n\n\"  Sort Key: tt.tweet_id\"\"  Sort Method: quicksort  Memory: 97kB\"\"  Buffers: shared hit=1849 read=32788\"\"  ->  Nested Loop  (cost=105592.06..4899702.04 rows=3447 width=20) (actual time=19151.036..37555.227 rows=1640 loops=1)\"\n\"        Buffers: shared hit=1849 read=32788\"\"        ->  Hash Join  (cost=105574.10..116461.68 rows=1679 width=8) (actual time=19099.848..19127.606 rows=597 loops=1)\"\"              Hash Cond: (relationship.followed_id = t.user_id)\"\n\"              Buffers: shared hit=3 read=31870\"\"              ->  Index Only Scan using relationship_id on relationship  (cost=0.42..227.12 rows=154 width=8) (actual time=66.102..89.721 rows=106 loops=1)\"\n\n\"                    Index Cond: (follower_id = 335093362)\"\"                    Heap Fetches: 0\"\"                    Buffers: shared hit=2 read=3\"\n\"              ->  Hash  (cost=83308.25..83308.25 rows=1781234 width=16) (actual time=19031.916..19031.916 rows=1759645 loops=1)\"\n\"                    Buckets: 262144  Batches: 1  Memory Usage: 61863kB\"\"                    Buffers: shared hit=1 read=31867\"\"                    ->  Index Scan using tweet_creation_time_index on tweet t  (cost=0.57..83308.25 rows=1781234 width=16) (actual time=48.595..13759.768 rows=1759645 loops=1)\"\n\n\"                          Index Cond: ((creation_time >= '2013-05-05 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06 00:00:00-03'::timestamp with time zone))\"\n\"                          Buffers: shared hit=1 read=31867\"\"        ->  Bitmap Heap Scan on tweet_topic tt  (cost=17.96..2841.63 rows=723 width=20) (actual time=30.774..30.847 rows=3 loops=597)\"\n\n\"              Recheck Cond: (tweet_id = t.id)\"\"              Buffers: shared hit=1846 read=918\"\"              ->  Bitmap Index Scan on tweet_topic_pk  (cost=0.00..17.78 rows=723 width=0) (actual time=23.084..23.084 rows=3 loops=597)\"\n\n\"                    
Index Cond: (tweet_id = t.id)\"\"                    Buffers: shared hit=1763 read=632\"\nYou said that I would need B-Tree indexes on the fields that I want the planner to use index only scan, and I think I have them already on the tweet table:\n\"tweet_ios_index\" btree (id, user_id, creation_time)Shouldn't the tweet_ios_index be enough to make the scan over tweet_creation_time_index be a index only scan? And, more important, would it be really faster?\nThank you very much,CaioOn Mon, Nov 4, 2013 at 7:22 PM, Elliot <[email protected]> wrote:\n\n\nOn 2013-11-04 16:10, Caio Casimiro\n wrote:\n\n\nHi Neyman, thank you for your answer.\n\nUnfortunately this query runs almost at the same time:\n\n\n\n\nSort  (cost=4877693.98..4877702.60 rows=3449 width=20)\n (actual time=25820.291..25821.845 rows=1640 loops=1)\n  Sort Key: tt.tweet_id\n  Sort Method: quicksort  Memory: 97kB\n  Buffers: shared hit=1849 read=32788\n  ->  Nested Loop  (cost=247.58..4877491.32\n rows=3449 width=20) (actual time=486.839..25814.120\n rows=1640 loops=1)\n        Buffers: shared hit=1849 read=32788\n        ->  Hash Semi Join  (cost=229.62..88553.23\n rows=1681 width=8) (actual time=431.654..13209.159\n rows=597 loops=1)\n              Hash Cond: (t.user_id =\n relationship.followed_id)\n              Buffers: shared hit=3 read=31870\n              ->  Index Scan using\n tweet_creation_time_index on tweet t  (cost=0.57..83308.25\n rows=1781234 width=16) (actual time=130.144..10037.764\n rows=1759645 loops=1)\n                    Index Cond: ((creation_time >=\n '2013-05-05 00:00:00-03'::timestamp with time zone) AND\n (creation_time <= '2013-05-06 00:00:00-03'::timestamp\n with time zone))\n                    Buffers: shared hit=1 read=31867\n              ->  Hash  (cost=227.12..227.12\n rows=154 width=8) (actual time=94.365..94.365 rows=106\n loops=1)\n                    Buckets: 1024  Batches: 1  Memory\n Usage: 3kB\n                    Buffers: shared hit=2 read=3\n                    ->  Index Only Scan using\n relationship_id on relationship  (cost=0.42..227.12\n rows=154 width=8) (actual time=74.540..94.101 rows=106\n loops=1)\n                          Index Cond: (follower_id =\n 335093362)\n                          Heap Fetches: 0\n                          Buffers: shared hit=2 read=3\n        ->  Bitmap Heap Scan on tweet_topic tt\n  (cost=17.96..2841.63 rows=723 width=20) (actual\n time=21.014..21.085 rows=3 loops=597)\n              Recheck Cond: (tweet_id = t.id)\n              Buffers: shared hit=1846 read=918\n              ->  Bitmap Index Scan on\n tweet_topic_pk  (cost=0.00..17.78 rows=723 width=0)\n (actual time=15.012..15.012 rows=3 loops=597)\n                    Index Cond: (tweet_id = t.id)\n                    Buffers: shared hit=1763 read=632\nTotal runtime: 25823.386 ms\n\n\n\n\nI have noticed that in both queries the index scan on\n tweet_creation_time_index is very expensive. Is there anything\n I can do to make the planner choose a index only scan?\n\n\n\n\n\n Yes, because that part of the query is kicking back so many rows,\n many of which are totally unnecessary anyway - you're first getting\n all the tweets in a particular time range, then limiting them down\n to just users that are followed. Here's clarification on the\n approach I mentioned earlier. All you should really need are basic\n (btree) indexes on your different keys (tweet_topic.tweet_id,\n tweet.id, tweet.user_id, relationship.follower_id,\n relationship.followed_id). 
I also changed the left join to an inner\n join as somebody pointed out that your logic amounted to reducing\n the match to an inner join anyway. \n\n SELECT tt.tweet_id, tt.topic, tt.topic_value\n FROM tweet_topic AS tt\n   JOIN tweet AS t\n     ON tt.tweet_id = t.id\n   join relationship\n     on t.user_id = relationship.followed_id\n WHERE creation_time BETWEEN 'D1' AND 'D2'\n   AND relationship.follower_id = N\n ORDER BY tt.tweet_id\n ;", "msg_date": "Mon, 4 Nov 2013 23:59:27 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "Hello, thank you for your answer. I will give it a try and then I post here\nthe results.\nIn the original email I post the output of \\d+ tweet, which contains the\nindexes and constraints.\n\nBest regards,\nCaio\n\n\nOn Mon, Nov 4, 2013 at 8:59 PM, desmodemone <[email protected]> wrote:\n\n> Hello,\n> I think you could try with an index on tweet table columns\n> \"user_id, creation_time\" [in this order , because the first argument is for\n> the equality predicate and the second with the range scan predicate, the\n> index tweet_user_id_creation_time_index is not ok because it has the\n> reverse order ] so the Hash Join between relationship and tweet will\n> become in theory a netsted loop and so the filter relationship.followed_id\n> = t.user_id will be pushed on the new index search condition with also\n> the creation_time > .. and creation_time < ... . In this manner you will\n> reduce the random i/o of the scanning of 1759645 rows from tweet that are\n> filter later now in hash join to 1679.\n>\n> I hope it will work, if not, I hope you could attach the DDL of the table\n> ( with constraints and indexes) to better understand the problem.\n>\n> Bye\n>\n>\n> 2013/11/4 Caio Casimiro <[email protected]>\n>\n>> Hi Elliot, thank you for your answer.\n>>\n>> I tried this query but it still suffer with index scan on\n>> tweet_creation_time_index:\n>>\n>> \"Sort (cost=4899904.57..4899913.19 rows=3447 width=20) (actual\n>> time=37560.938..37562.503 rows=1640 loops=1)\"\n>> \" Sort Key: tt.tweet_id\"\n>> \" Sort Method: quicksort Memory: 97kB\"\n>> \" Buffers: shared hit=1849 read=32788\"\n>> \" -> Nested Loop (cost=105592.06..4899702.04 rows=3447 width=20)\n>> (actual time=19151.036..37555.227 rows=1640 loops=1)\"\n>> \" Buffers: shared hit=1849 read=32788\"\n>> \" -> Hash Join (cost=105574.10..116461.68 rows=1679 width=8)\n>> (actual time=19099.848..19127.606 rows=597 loops=1)\"\n>> \" Hash Cond: (relationship.followed_id = t.user_id)\"\n>> \" Buffers: shared hit=3 read=31870\"\n>> \" -> Index Only Scan using relationship_id on relationship\n>> (cost=0.42..227.12 rows=154 width=8) (actual time=66.102..89.721\n>> rows=106 loops=1)\"\n>> \" Index Cond: (follower_id = 335093362)\"\n>> \" Heap Fetches: 0\"\n>> \" Buffers: shared hit=2 read=3\"\n>> \" -> Hash (cost=83308.25..83308.25 rows=1781234 width=16)\n>> (actual time=19031.916..19031.916 rows=1759645 loops=1)\"\n>> \" Buckets: 262144 Batches: 1 Memory Usage: 61863kB\"\n>> \" Buffers: shared hit=1 read=31867\"\n>> \" -> Index Scan using tweet_creation_time_index on\n>> tweet t (cost=0.57..83308.25 rows=1781234 width=16) (actual\n>> time=48.595..13759.768 rows=1759645 loops=1)\"\n>> \" Index Cond: ((creation_time >= '2013-05-05\n>> 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06\n>> 00:00:00-03'::timestamp with time zone))\"\n>> \" Buffers: shared hit=1 read=31867\"\n>> \" 
-> Bitmap Heap Scan on tweet_topic tt (cost=17.96..2841.63\n>> rows=723 width=20) (actual time=30.774..30.847 rows=3 loops=597)\"\n>> \" Recheck Cond: (tweet_id = t.id)\"\n>> \" Buffers: shared hit=1846 read=918\"\n>> \" -> Bitmap Index Scan on tweet_topic_pk (cost=0.00..17.78\n>> rows=723 width=0) (actual time=23.084..23.084 rows=3 loops=597)\"\n>> \" Index Cond: (tweet_id = t.id)\"\n>> \" Buffers: shared hit=1763 read=632\"\n>>\n>> You said that I would need B-Tree indexes on the fields that I want the\n>> planner to use index only scan, and I think I have them already on the\n>> tweet table:\n>>\n>> \"tweet_ios_index\" btree (id, user_id, creation_time)\n>>\n>> Shouldn't the tweet_ios_index be enough to make the scan over\n>> tweet_creation_time_index be a index only scan? And, more important, would\n>> it be really faster?\n>>\n>> Thank you very much,\n>> Caio\n>>\n>>\n>> On Mon, Nov 4, 2013 at 7:22 PM, Elliot <[email protected]>wrote:\n>>\n>>> On 2013-11-04 16:10, Caio Casimiro wrote:\n>>>\n>>> Hi Neyman, thank you for your answer.\n>>>\n>>> Unfortunately this query runs almost at the same time:\n>>>\n>>> Sort (cost=4877693.98..4877702.60 rows=3449 width=20) (actual\n>>> time=25820.291..25821.845 rows=1640 loops=1)\n>>> Sort Key: tt.tweet_id\n>>> Sort Method: quicksort Memory: 97kB\n>>> Buffers: shared hit=1849 read=32788\n>>> -> Nested Loop (cost=247.58..4877491.32 rows=3449 width=20) (actual\n>>> time=486.839..25814.120 rows=1640 loops=1)\n>>> Buffers: shared hit=1849 read=32788\n>>> -> Hash Semi Join (cost=229.62..88553.23 rows=1681 width=8)\n>>> (actual time=431.654..13209.159 rows=597 loops=1)\n>>> Hash Cond: (t.user_id = relationship.followed_id)\n>>> Buffers: shared hit=3 read=31870\n>>> -> Index Scan using tweet_creation_time_index on tweet t\n>>> (cost=0.57..83308.25 rows=1781234 width=16) (actual\n>>> time=130.144..10037.764 rows=1759645 loops=1)\n>>> Index Cond: ((creation_time >= '2013-05-05\n>>> 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06\n>>> 00:00:00-03'::timestamp with time zone))\n>>> Buffers: shared hit=1 read=31867\n>>> -> Hash (cost=227.12..227.12 rows=154 width=8) (actual\n>>> time=94.365..94.365 rows=106 loops=1)\n>>> Buckets: 1024 Batches: 1 Memory Usage: 3kB\n>>> Buffers: shared hit=2 read=3\n>>> -> Index Only Scan using relationship_id on\n>>> relationship (cost=0.42..227.12 rows=154 width=8) (actual\n>>> time=74.540..94.101 rows=106 loops=1)\n>>> Index Cond: (follower_id = 335093362)\n>>> Heap Fetches: 0\n>>> Buffers: shared hit=2 read=3\n>>> -> Bitmap Heap Scan on tweet_topic tt (cost=17.96..2841.63\n>>> rows=723 width=20) (actual time=21.014..21.085 rows=3 loops=597)\n>>> Recheck Cond: (tweet_id = t.id)\n>>> Buffers: shared hit=1846 read=918\n>>> -> Bitmap Index Scan on tweet_topic_pk (cost=0.00..17.78\n>>> rows=723 width=0) (actual time=15.012..15.012 rows=3 loops=597)\n>>> Index Cond: (tweet_id = t.id)\n>>> Buffers: shared hit=1763 read=632\n>>> Total runtime: 25823.386 ms\n>>>\n>>> I have noticed that in both queries the index scan on\n>>> tweet_creation_time_index is very expensive. Is there anything I can do to\n>>> make the planner choose a index only scan?\n>>>\n>>>\n>>> Yes, because that part of the query is kicking back so many rows, many\n>>> of which are totally unnecessary anyway - you're first getting all the\n>>> tweets in a particular time range, then limiting them down to just users\n>>> that are followed. Here's clarification on the approach I mentioned\n>>> earlier. 
All you should really need are basic (btree) indexes on your\n>>> different keys (tweet_topic.tweet_id, tweet.id, tweet.user_id,\n>>> relationship.follower_id, relationship.followed_id). I also changed the\n>>> left join to an inner join as somebody pointed out that your logic amounted\n>>> to reducing the match to an inner join anyway.\n>>>\n>>> SELECT tt.tweet_id, tt.topic, tt.topic_value\n>>> FROM tweet_topic AS tt\n>>> JOIN tweet AS t\n>>> ON tt.tweet_id = t.id\n>>> join relationship\n>>> on t.user_id = relationship.followed_id\n>>>\n>>> WHERE creation_time BETWEEN 'D1' AND 'D2'\n>>> AND relationship.follower_id = N\n>>> ORDER BY tt.tweet_id\n>>> ;\n>>>\n>>>\n>>\n>\n\nHello, thank you for your answer. I will give it a try and then I post here the results.In the original email I post the output of \\d+ tweet, which contains the indexes and constraints.\nBest regards,CaioOn Mon, Nov 4, 2013 at 8:59 PM, desmodemone <[email protected]> wrote:\nHello,              I think you could try with an index on tweet table columns \"user_id, creation_time\" [in this order , because the first argument is for the equality predicate and the second with the range scan predicate, the index tweet_user_id_creation_time_index is not ok because it has the reverse order ]  so the Hash Join between relationship and tweet   will become in theory a netsted loop and so the filter relationship.followed_id = t.user_id   will be pushed on the new index search condition with also the creation_time > .. and creation_time < ... . In this manner you will reduce the random i/o of the scanning of 1759645 rows from tweet that are filter later now in hash join to 1679.\nI hope it will work, if not, I hope you could attach the DDL of the table ( with constraints and indexes) to better understand the problem.Bye\n\n2013/11/4 Caio Casimiro <[email protected]>\nHi Elliot, thank you for your answer.I tried this query but it still suffer with index scan on tweet_creation_time_index:\"Sort  (cost=4899904.57..4899913.19 rows=3447 width=20) (actual time=37560.938..37562.503 rows=1640 loops=1)\"\n\n\"  Sort Key: tt.tweet_id\"\"  Sort Method: quicksort  Memory: 97kB\"\"  Buffers: shared hit=1849 read=32788\"\"  ->  Nested Loop  (cost=105592.06..4899702.04 rows=3447 width=20) (actual time=19151.036..37555.227 rows=1640 loops=1)\"\n\"        Buffers: shared hit=1849 read=32788\"\"        ->  Hash Join  (cost=105574.10..116461.68 rows=1679 width=8) (actual time=19099.848..19127.606 rows=597 loops=1)\"\"              Hash Cond: (relationship.followed_id = t.user_id)\"\n\"              Buffers: shared hit=3 read=31870\"\"              ->  Index Only Scan using relationship_id on relationship  (cost=0.42..227.12 rows=154 width=8) (actual time=66.102..89.721 rows=106 loops=1)\"\n\n\"                    Index Cond: (follower_id = 335093362)\"\"                    Heap Fetches: 0\"\"                    Buffers: shared hit=2 read=3\"\n\"              ->  Hash  (cost=83308.25..83308.25 rows=1781234 width=16) (actual time=19031.916..19031.916 rows=1759645 loops=1)\"\n\"                    Buckets: 262144  Batches: 1  Memory Usage: 61863kB\"\"                    Buffers: shared hit=1 read=31867\"\"                    ->  Index Scan using tweet_creation_time_index on tweet t  (cost=0.57..83308.25 rows=1781234 width=16) (actual time=48.595..13759.768 rows=1759645 loops=1)\"\n\n\"                          Index Cond: ((creation_time >= '2013-05-05 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-05-06 00:00:00-03'::timestamp with 
time zone))\"\n\"                          Buffers: shared hit=1 read=31867\"\"        ->  Bitmap Heap Scan on tweet_topic tt  (cost=17.96..2841.63 rows=723 width=20) (actual time=30.774..30.847 rows=3 loops=597)\"\n\n\"              Recheck Cond: (tweet_id = t.id)\"\"              Buffers: shared hit=1846 read=918\"\"              ->  Bitmap Index Scan on tweet_topic_pk  (cost=0.00..17.78 rows=723 width=0) (actual time=23.084..23.084 rows=3 loops=597)\"\n\n\"                    Index Cond: (tweet_id = t.id)\"\"                    Buffers: shared hit=1763 read=632\"\n\nYou said that I would need B-Tree indexes on the fields that I want the planner to use index only scan, and I think I have them already on the tweet table:\n\"tweet_ios_index\" btree (id, user_id, creation_time)Shouldn't the tweet_ios_index be enough to make the scan over tweet_creation_time_index be a index only scan? And, more important, would it be really faster?\nThank you very much,CaioOn Mon, Nov 4, 2013 at 7:22 PM, Elliot <[email protected]> wrote:\n\n\nOn 2013-11-04 16:10, Caio Casimiro\n wrote:\n\n\nHi Neyman, thank you for your answer.\n\nUnfortunately this query runs almost at the same time:\n\n\n\n\nSort  (cost=4877693.98..4877702.60 rows=3449 width=20)\n (actual time=25820.291..25821.845 rows=1640 loops=1)\n  Sort Key: tt.tweet_id\n  Sort Method: quicksort  Memory: 97kB\n  Buffers: shared hit=1849 read=32788\n  ->  Nested Loop  (cost=247.58..4877491.32\n rows=3449 width=20) (actual time=486.839..25814.120\n rows=1640 loops=1)\n        Buffers: shared hit=1849 read=32788\n        ->  Hash Semi Join  (cost=229.62..88553.23\n rows=1681 width=8) (actual time=431.654..13209.159\n rows=597 loops=1)\n              Hash Cond: (t.user_id =\n relationship.followed_id)\n              Buffers: shared hit=3 read=31870\n              ->  Index Scan using\n tweet_creation_time_index on tweet t  (cost=0.57..83308.25\n rows=1781234 width=16) (actual time=130.144..10037.764\n rows=1759645 loops=1)\n                    Index Cond: ((creation_time >=\n '2013-05-05 00:00:00-03'::timestamp with time zone) AND\n (creation_time <= '2013-05-06 00:00:00-03'::timestamp\n with time zone))\n                    Buffers: shared hit=1 read=31867\n              ->  Hash  (cost=227.12..227.12\n rows=154 width=8) (actual time=94.365..94.365 rows=106\n loops=1)\n                    Buckets: 1024  Batches: 1  Memory\n Usage: 3kB\n                    Buffers: shared hit=2 read=3\n                    ->  Index Only Scan using\n relationship_id on relationship  (cost=0.42..227.12\n rows=154 width=8) (actual time=74.540..94.101 rows=106\n loops=1)\n                          Index Cond: (follower_id =\n 335093362)\n                          Heap Fetches: 0\n                          Buffers: shared hit=2 read=3\n        ->  Bitmap Heap Scan on tweet_topic tt\n  (cost=17.96..2841.63 rows=723 width=20) (actual\n time=21.014..21.085 rows=3 loops=597)\n              Recheck Cond: (tweet_id = t.id)\n              Buffers: shared hit=1846 read=918\n              ->  Bitmap Index Scan on\n tweet_topic_pk  (cost=0.00..17.78 rows=723 width=0)\n (actual time=15.012..15.012 rows=3 loops=597)\n                    Index Cond: (tweet_id = t.id)\n                    Buffers: shared hit=1763 read=632\nTotal runtime: 25823.386 ms\n\n\n\n\nI have noticed that in both queries the index scan on\n tweet_creation_time_index is very expensive. 
Is there anything\n I can do to make the planner choose a index only scan?\n\n\n\n\n\n Yes, because that part of the query is kicking back so many rows,\n many of which are totally unnecessary anyway - you're first getting\n all the tweets in a particular time range, then limiting them down\n to just users that are followed. Here's clarification on the\n approach I mentioned earlier. All you should really need are basic\n (btree) indexes on your different keys (tweet_topic.tweet_id,\n tweet.id, tweet.user_id, relationship.follower_id,\n relationship.followed_id). I also changed the left join to an inner\n join as somebody pointed out that your logic amounted to reducing\n the match to an inner join anyway. \n\n SELECT tt.tweet_id, tt.topic, tt.topic_value\n FROM tweet_topic AS tt\n   JOIN tweet AS t\n     ON tt.tweet_id = t.id\n   join relationship\n     on t.user_id = relationship.followed_id\n WHERE creation_time BETWEEN 'D1' AND 'D2'\n   AND relationship.follower_id = N\n ORDER BY tt.tweet_id\n ;", "msg_date": "Mon, 4 Nov 2013 21:24:07 -0200", "msg_from": "Caio Casimiro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "From: Caio Casimiro [mailto:[email protected]]\nSent: Monday, November 04, 2013 4:33 PM\nTo: Igor Neyman\nCc: Jeff Janes; [email protected]\nSubject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field\n\nThese are the parameters I have set in postgresql.conf:\n\nwork_mem = 128MB\nshared_buffers = 1GB\nmaintenance_work_mem = 1536MB\nfsync = off\nsynchronous_commit = off\neffective_cache_size = 2GB\n\nThe hardware is a modest one:\nCPU: Intel(R) Atom(TM) CPU 230 @ 1.60GHz\nRAM: 2GB\nHD: 1TV 7200 RPM (WDC WD10EZEX-00RKKA0)\n\nThis machine runs a slackware 14.0 dedicated to the Postgresql.\n\nThank you,\nCaio\nWith just 2GB RAM, this:\n\nshared_buffers = 1GB\n\nand this:\n\neffective_cache_size = 2GB\n\nis too high.\n\nYou should lower those:\n\nshared_buffers = 256MB\neffective_cache_size = 1GB\n\nand see how your execution plan changes.\n\nOh, and this:\nmaintenance_work_mem = 1536MB\n\nis also too high.\nTurning off fsync and synchronous_commit is not very good idea.\n\nRegards,\nIgor Neyman\n\n\n\n\n\n\n\n\n\n\n \n \nFrom: Caio Casimiro [mailto:[email protected]]\n\nSent: Monday, November 04, 2013 4:33 PM\nTo: Igor Neyman\nCc: Jeff Janes; [email protected]\nSubject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field\n \n\n\n\nThese are the parameters I have set in postgresql.conf:\n\n\n \n\n\nwork_mem = 128MB\n\n\nshared_buffers = 1GB\n\n\nmaintenance_work_mem = 1536MB\n\n\nfsync = off\n\n\nsynchronous_commit = off\n\n\neffective_cache_size = 2GB\n\n\n \n\n\nThe hardware is a modest one:\n\n\nCPU: Intel(R) Atom(TM) CPU  230   @ 1.60GHz\n\n\nRAM: 2GB\n\n\nHD: 1TV 7200 RPM (WDC WD10EZEX-00RKKA0)\n\n\n \n\n\nThis machine runs a slackware 14.0 dedicated to the Postgresql. 
\n\n\n \n\n\nThank you,\n\n\nCaio\n\n\n\n\nWith just 2GB RAM, this:\n \nshared_buffers = 1GB\n \nand this:\n \neffective_cache_size = 2GB\n \nis too high.\n \nYou should lower those:\n \nshared_buffers = 256MB\neffective_cache_size = 1GB\n \nand see how your execution plan changes.\n \nOh, and this:\nmaintenance_work_mem = 1536MB\n \nis also too high.\nTurning off fsync and\nsynchronous_commit is not very good idea.\n \nRegards,\nIgor Neyman", "msg_date": "Tue, 5 Nov 2013 13:17:59 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "On Mon, Nov 4, 2013 at 2:10 PM, Caio Casimiro <[email protected]>wrote:\n\n>\n> You said that I would need B-Tree indexes on the fields that I want the\n> planner to use index only scan, and I think I have them already on the\n> tweet table:\n>\n> \"tweet_ios_index\" btree (id, user_id, creation_time)\n>\n> Shouldn't the tweet_ios_index be enough to make the scan over\n> tweet_creation_time_index be a index only scan?\n>\n\nYou can't efficiently scan an index when the first column in it is not\nconstrained. You would have to define the index as (creation_time,\nuser_id, id) instead to get it to use an IOS.\n\n\n\n> And, more important, would it be really faster?\n>\n\nProbably.\n\nCheers,\n\nJeff\n\nOn Mon, Nov 4, 2013 at 2:10 PM, Caio Casimiro <[email protected]> wrote:\nYou said that I would need B-Tree indexes on the fields that I want the planner to use index only scan, and I think I have them already on the tweet table:\n\n\"tweet_ios_index\" btree (id, user_id, creation_time)Shouldn't the tweet_ios_index be enough to make the scan over tweet_creation_time_index be a index only scan? \nYou can't efficiently scan an index when the first column in it is not constrained.  
You would have to define the index as (creation_time, user_id, id) instead to get it to use an IOS.\n And, more important, would it be really faster?\nProbably.Cheers,Jeff", "msg_date": "Tue, 5 Nov 2013 08:10:19 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "On Sun, Nov 3, 2013 at 4:05 PM, Caio Casimiro <[email protected]> wrote:\n> System Information:\n> OS: Slackware 14.0\n> Postgresql Version: 9.3 Beta2\n\nThis probably doesn't have anything to do with your problem, but it's\nlong past time to migrate from the beta to the production 9.3.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Nov 2013 10:44:48 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "On Mon, Nov 4, 2013 at 12:44 PM, Caio Casimiro <[email protected]>wrote:\n\n> Thank you very much for your answers guys!\n>\n>\n> On Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\n>\n>> On Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]>\n>> wrote:\n>>\n>>>\n>>> SELECT tt.tweet_id, tt.topic, tt.topic_value\n>>> FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id =\n>>> t.id\n>>> WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n>>> (SELECT followed_id FROM relationship WHERE follower_id = N)\n>>> ORDER BY tt.tweet_id;\n>>>\n>>\n>>\n>> I don't know if this affects the plan at all, but it is silly to do a\n>> left join to \"tweet\" when the WHERE clause has conditions that can't be\n>> satisfied with a null row. Also, you could try changing the IN-list to an\n>> EXISTS subquery.\n>>\n>\n> I'm sorry the ignorance, but I don't understand the issue with the left\n> join, could you explain more?\n>\n\nA left join means you are telling it to make up an all-NULL tweet row for\nany tweet_topic that does not have a corresponding tweet. But then once it\ndid so, it would just filter out that row later, because the null\ncreation_time and user_id cannot pass the WHERE criteria--so doing a left\njoin can't change the answer, but it can fool the planner into making a\nworse choice.\n\n\n>\n>\n>> Is there some patterns to D1 and D2 that could help the caching? For\n>> example, are they both usually in the just-recent past?\n>>\n> The only pattern is that it is always a one day interval, e.g. D1 =\n> '2013-05-01' and D2 = '2013-05-02'.\n>\n\nIf you only compare creation_time to dates, rather than ever using\ndate+time, then it would probably be better to store them in the table as\ndate, not timestamp. This might make the index smaller, and can also lead\nto better estimates and index usage.\n\nBut why would you want to offer suggestions to someone based on tweets that\nwere made on exactly one day, over 5 months ago? 
I can see why would want\na brief period in the immediate past, or a long period; but a brief period\nthat is not the recent past just seems like a strange thing to want to do.\n (And it is going to be hard to get good performance with that requirement.)\n\nCheers,\n\nJeff\n\nOn Mon, Nov 4, 2013 at 12:44 PM, Caio Casimiro <[email protected]> wrote:\nThank you very much for your answers guys!\n\nOn Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\n\nOn Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]> wrote:\n\n\n\nSELECT tt.tweet_id, tt.topic, tt.topic_value\n            FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id = t.id            WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n            (SELECT followed_id FROM relationship WHERE follower_id = N) ORDER BY tt.tweet_id;I don't know if this affects the plan at all, but it is silly to do a left join to \"tweet\" when the WHERE clause has conditions that can't be satisfied with a null row.  Also, you could try changing the IN-list to an EXISTS subquery.\nI'm sorry the ignorance, but I don't understand the issue with the left join, could you explain more?\nA left join means you are telling it to make up an all-NULL tweet row for any tweet_topic that does not have a corresponding tweet.  But then once it did so, it would just filter out that row later, because the null creation_time and user_id cannot pass the WHERE criteria--so doing a left join can't change the answer, but it can fool the planner into making a worse choice.\n \n\nIs there some patterns to D1 and D2 that could help the caching?  For example, are they both usually in the just-recent past?\n\nThe only pattern is that it is always a one day interval, e.g. D1 = '2013-05-01' and  D2 = '2013-05-02'.If you only compare creation_time to dates, rather than ever using date+time, then it would probably be better to store them in the table as date, not timestamp.  This might make the index smaller, and can also lead to better estimates and index usage.\nBut why would you want to offer suggestions to someone based on tweets that were made on exactly one day, over 5 months ago?  I can see why would want a brief period in the immediate past, or a long period; but a brief period that is not the recent past just seems like a strange thing to want to do.  (And it is going to be hard to get good performance with that requirement.)\n Cheers,Jeff", "msg_date": "Wed, 6 Nov 2013 11:00:45 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" }, { "msg_contents": "Thank you for your considerations Jeff. Actually I'm running an experiment\nproposed by other researchers to evaluate a recommendation model.\nMy database is composed only by old tweets. In this experiment the\nrecommendation model is evaluated in a daily basis, and that's the reason\nthe query collect tweets published in a specific date.\n\nIn fact, this is not the only experiment I will run. As you pointed it is\nrather strange to recommend only tweets published in the present day. 
I\nhave a different experiment that will collect tweets published in a wider\ntime range.\n\nWith respect to the query's performance issue I have made some progress\nfollowing what you and Mat D said about creating an index with\ncreation_time,user_id and id.\n\nI created an index over tweet(user_id, creation_time, id) and it made the\nplanner to IOS this index.\n\nNow the most expensive part of the query is the outer nested loop (as you\ncan see at the end of the email).\nI'm considering to embed the topic information of tweets in a text field in\nthe table tweet, this way I would not need to joint the tweet_topic table.\nAnd this query without the join with the table tweet_topic is running at ~\n50ms!\nCould you recommend anything different from this approach?\n\nThank you very much,\nCasimiro\n\n\nSELECT tt.tweet_id, tt.topic, tt.topic_value\n FROM tweet_topic AS tt JOIN tweet AS t ON (tt.tweet_id = t.id\n AND t.creation_time\nBETWEEN 'D1' AND 'D2' AND t.user_id in\n (SELECT followed_id FROM\nrelationship WHERE follower_id = N))\n\n\"Nested Loop (cost=1.57..5580452.55 rows=3961 width=20) (actual\ntime=33.737..5106.898 rows=3058 loops=1)\"\n\" Buffers: shared hit=3753 read=1278\"\n\" -> Nested Loop (cost=1.00..1005.35 rows=1930 width=8) (actual\ntime=0.070..77.244 rows=978 loops=1)\"\n\" Buffers: shared hit=484 read=5\"\n\" -> Index Only Scan using relationship_id on relationship\n(cost=0.42..231.12 rows=154 width=8) (actual time=0.034..0.314 rows=106\nloops=1)\"\n\" Index Cond: (follower_id = 335093362)\"\n\" Heap Fetches: 0\"\n\" Buffers: shared hit=5\"\n\" -> Index Only Scan using tweet_ios_index on tweet t\n(cost=0.57..4.90 rows=13 width=16) (actual time=0.025..0.695 rows=9\nloops=106)\"\n\" Index Cond: ((user_id = relationship.followed_id) AND\n(creation_time >= '2013-06-21 00:00:00-03'::timestamp with time zone) AND\n(creation_time <= '2013-06-22 00:00:00-03'::timestamp with time zone))\"\n\" Heap Fetches: 0\"\n\" Buffers: shared hit=479 read=5\"\n\" -> Index Scan using tweet_topic_tweet_id_index on tweet_topic tt\n(cost=0.57..2883.60 rows=731 width=20) (actual time=5.119..5.128 rows=3\nloops=978)\"\n\" Index Cond: (tweet_id = t.id)\"\n\" Buffers: shared hit=3269 read=1273\"\n\"Total runtime: 5110.217 ms\"\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nSELECT t.id FROM tweet AS t WHERE t.creation_time BETWEEN 'D1' AND 'D2' AND\nt.user_id in\n (SELECT followed_id FROM\nrelationship WHERE follower_id = N)\n\n\"Nested Loop (cost=1.00..1012.13 rows=2244 width=8) (actual\ntime=0.074..51.855 rows=877 loops=1)\"\n\" Buffers: shared hit=432 read=4\"\n\" -> Index Only Scan using relationship_id on relationship\n(cost=0.42..227.12 rows=154 width=8) (actual time=0.034..0.218 rows=106\nloops=1)\"\n\" Index Cond: (follower_id = 335093362)\"\n\" Heap Fetches: 0\"\n\" Buffers: shared hit=5\"\n\" -> Index Only Scan using tweet_ios_index on tweet t (cost=0.57..4.95\nrows=15 width=16) (actual time=0.021..0.468 rows=8 loops=106)\"\n\" Index Cond: ((user_id = relationship.followed_id) AND\n(creation_time >= '2013-06-22 00:00:00-03'::timestamp with time zone) AND\n(creation_time <= '2013-06-23 00:00:00-03'::timestamp with time zone))\"\n\" Heap Fetches: 0\"\n\" Buffers: shared hit=427 read=4\"\n\"Total runtime: 52.692 ms\"\n\n\n\nOn Wed, Nov 6, 2013 at 5:00 PM, Jeff 
Janes <[email protected]> wrote:\n\n> On Mon, Nov 4, 2013 at 12:44 PM, Caio Casimiro <[email protected]>wrote:\n>\n>> Thank you very much for your answers guys!\n>>\n>>\n>> On Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\n>>\n>>> On Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro <[email protected]\n>>> > wrote:\n>>>\n>>>>\n>>>> SELECT tt.tweet_id, tt.topic, tt.topic_value\n>>>> FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id\n>>>> = t.id\n>>>> WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n>>>> (SELECT followed_id FROM relationship WHERE follower_id =\n>>>> N) ORDER BY tt.tweet_id;\n>>>>\n>>>\n>>>\n>>> I don't know if this affects the plan at all, but it is silly to do a\n>>> left join to \"tweet\" when the WHERE clause has conditions that can't be\n>>> satisfied with a null row. Also, you could try changing the IN-list to an\n>>> EXISTS subquery.\n>>>\n>>\n>> I'm sorry the ignorance, but I don't understand the issue with the left\n>> join, could you explain more?\n>>\n>\n> A left join means you are telling it to make up an all-NULL tweet row for\n> any tweet_topic that does not have a corresponding tweet. But then once it\n> did so, it would just filter out that row later, because the null\n> creation_time and user_id cannot pass the WHERE criteria--so doing a left\n> join can't change the answer, but it can fool the planner into making a\n> worse choice.\n>\n>\n>>\n>>\n>>> Is there some patterns to D1 and D2 that could help the caching? For\n>>> example, are they both usually in the just-recent past?\n>>>\n>> The only pattern is that it is always a one day interval, e.g. D1 =\n>> '2013-05-01' and D2 = '2013-05-02'.\n>>\n>\n> If you only compare creation_time to dates, rather than ever using\n> date+time, then it would probably be better to store them in the table as\n> date, not timestamp. This might make the index smaller, and can also lead\n> to better estimates and index usage.\n>\n> But why would you want to offer suggestions to someone based on tweets\n> that were made on exactly one day, over 5 months ago? I can see why would\n> want a brief period in the immediate past, or a long period; but a brief\n> period that is not the recent past just seems like a strange thing to want\n> to do. (And it is going to be hard to get good performance with that\n> requirement.)\n>\n> Cheers,\n>\n> Jeff\n>\n\nThank you for your considerations Jeff. Actually I'm running an experiment proposed by other researchers to evaluate a recommendation model.My database is composed only by old tweets. In this experiment the recommendation model is evaluated in a daily basis, and that's the reason the query collect tweets published in a specific date.\nIn fact, this is not the only experiment I will run. As you pointed it is rather strange to recommend only tweets published in the present day. I have a different experiment that will collect tweets published in a wider time range.\nWith respect to the query's performance issue I have made some progress following what you and Mat D said about creating an index with creation_time,user_id and id.I created an index over tweet(user_id, creation_time, id) and it made the planner to IOS this index.\nNow the most expensive part of the query is the outer nested loop (as you can see at the end of the email). 
I'm considering to embed the topic information of tweets in a text field in the table tweet, this way I would not need to joint the tweet_topic table.\nAnd this query without the join with the table tweet_topic is running at ~ 50ms!Could you recommend anything different from this approach?Thank you very much,Casimiro\nSELECT tt.tweet_id, tt.topic, tt.topic_value            FROM tweet_topic AS tt  JOIN tweet AS t ON (tt.tweet_id = t.id                                                  AND t.creation_time BETWEEN 'D1' AND 'D2' AND t.user_id in\n                                         (SELECT followed_id FROM relationship WHERE follower_id = N))\"Nested Loop  (cost=1.57..5580452.55 rows=3961 width=20) (actual time=33.737..5106.898 rows=3058 loops=1)\"\n\"  Buffers: shared hit=3753 read=1278\"\"  ->  Nested Loop  (cost=1.00..1005.35 rows=1930 width=8) (actual time=0.070..77.244 rows=978 loops=1)\"\"        Buffers: shared hit=484 read=5\"\n\"        ->  Index Only Scan using relationship_id on relationship  (cost=0.42..231.12 rows=154 width=8) (actual time=0.034..0.314 rows=106 loops=1)\"\"              Index Cond: (follower_id = 335093362)\"\n\"              Heap Fetches: 0\"\"              Buffers: shared hit=5\"\"        ->  Index Only Scan using tweet_ios_index on tweet t  (cost=0.57..4.90 rows=13 width=16) (actual time=0.025..0.695 rows=9 loops=106)\"\n\"              Index Cond: ((user_id = relationship.followed_id) AND (creation_time >= '2013-06-21 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-06-22 00:00:00-03'::timestamp with time zone))\"\n\"              Heap Fetches: 0\"\"              Buffers: shared hit=479 read=5\"\"  ->  Index Scan using tweet_topic_tweet_id_index on tweet_topic tt  (cost=0.57..2883.60 rows=731 width=20) (actual time=5.119..5.128 rows=3 loops=978)\"\n\"        Index Cond: (tweet_id = t.id)\"\"        Buffers: shared hit=3269 read=1273\"\"Total runtime: 5110.217 ms\"------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nSELECT t.id FROM tweet AS t WHERE t.creation_time BETWEEN 'D1' AND 'D2' AND t.user_id in                                         (SELECT followed_id FROM relationship WHERE follower_id = N)\n\"Nested Loop  (cost=1.00..1012.13 rows=2244 width=8) (actual time=0.074..51.855 rows=877 loops=1)\"\"  Buffers: shared hit=432 read=4\"\"  ->  Index Only Scan using relationship_id on relationship  (cost=0.42..227.12 rows=154 width=8) (actual time=0.034..0.218 rows=106 loops=1)\"\n\"        Index Cond: (follower_id = 335093362)\"\"        Heap Fetches: 0\"\"        Buffers: shared hit=5\"\"  ->  Index Only Scan using tweet_ios_index on tweet t  (cost=0.57..4.95 rows=15 width=16) (actual time=0.021..0.468 rows=8 loops=106)\"\n\"        Index Cond: ((user_id = relationship.followed_id) AND (creation_time >= '2013-06-22 00:00:00-03'::timestamp with time zone) AND (creation_time <= '2013-06-23 00:00:00-03'::timestamp with time zone))\"\n\"        Heap Fetches: 0\"\"        Buffers: shared hit=427 read=4\"\"Total runtime: 52.692 ms\"On Wed, Nov 6, 2013 at 5:00 PM, Jeff Janes <[email protected]> wrote:\nOn Mon, Nov 4, 2013 at 12:44 PM, Caio Casimiro <[email protected]> wrote:\n\nThank you very much for your answers guys!\n\nOn Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes <[email protected]> wrote:\n\n\nOn Sun, Nov 3, 2013 at 
2:05 PM, Caio Casimiro <[email protected]> wrote:\n\n\n\n\nSELECT tt.tweet_id, tt.topic, tt.topic_value\n            FROM tweet_topic AS tt LEFT JOIN tweet AS t ON tt.tweet_id = t.id            WHERE creation_time BETWEEN 'D1' AND 'D2' AND user_id in\n            (SELECT followed_id FROM relationship WHERE follower_id = N) ORDER BY tt.tweet_id;I don't know if this affects the plan at all, but it is silly to do a left join to \"tweet\" when the WHERE clause has conditions that can't be satisfied with a null row.  Also, you could try changing the IN-list to an EXISTS subquery.\nI'm sorry the ignorance, but I don't understand the issue with the left join, could you explain more?\nA left join means you are telling it to make up an all-NULL tweet row for any tweet_topic that does not have a corresponding tweet.  But then once it did so, it would just filter out that row later, because the null creation_time and user_id cannot pass the WHERE criteria--so doing a left join can't change the answer, but it can fool the planner into making a worse choice.\n\n \n\nIs there some patterns to D1 and D2 that could help the caching?  For example, are they both usually in the just-recent past?\n\nThe only pattern is that it is always a one day interval, e.g. D1 = '2013-05-01' and  D2 = '2013-05-02'.\nIf you only compare creation_time to dates, rather than ever using date+time, then it would probably be better to store them in the table as date, not timestamp.  This might make the index smaller, and can also lead to better estimates and index usage.\nBut why would you want to offer suggestions to someone based on tweets that were made on exactly one day, over 5 months ago?  I can see why would want a brief period in the immediate past, or a long period; but a brief period that is not the recent past just seems like a strange thing to want to do.  (And it is going to be hard to get good performance with that requirement.)\n Cheers,Jeff", "msg_date": "Wed, 6 Nov 2013 17:24:24 -0200", "msg_from": "Caio Casimiro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow index scan on B-Tree index over timestamp field" } ]
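A minimal sketch of the indexing approach this thread converges on, assuming the table and column names quoted above (tweet, tweet_topic, relationship); the index name, the sample date range and the follower id are only illustrative, and whether the planner actually picks an index-only scan still depends on up-to-date statistics and the visibility map:

-- Equality column (user_id) first, range column (creation_time) second, and id
-- last so the join key can be returned straight from the index:
CREATE INDEX tweet_user_time_id_idx ON tweet (user_id, creation_time, id);

-- Inner-join form of the query; the earlier LEFT JOIN changed nothing because
-- the WHERE clause rejects NULL rows anyway:
SELECT tt.tweet_id, tt.topic, tt.topic_value
FROM tweet_topic AS tt
JOIN tweet AS t ON tt.tweet_id = t.id
JOIN relationship AS r ON t.user_id = r.followed_id
WHERE t.creation_time BETWEEN '2013-05-05 00:00:00-03' AND '2013-05-06 00:00:00-03'
  AND r.follower_id = 335093362
ORDER BY tt.tweet_id;

In the plans posted above, once the per-user index-only scans kick in the remaining cost is the tweet_topic join, not this part of the query.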
[ { "msg_contents": "I have a model like this:\n\nhttp://i.stack.imgur.com/qCZpD.png\n\nwith approximately these table sizes\n\nJOB: 8k\nDOCUMENT: 150k\nTRANSLATION_UNIT: 14,5m\nTRANSLATION: 18,3m\n\nNow the following query takes about 90 seconds to finish.\n\n*select* translation.id\n*from* \"TRANSLATION\" translation\n *inner join* \"TRANSLATION_UNIT\" unit\n *on* translation.fk_id_translation_unit = unit.id\n *inner join* \"DOCUMENT\" document\n *on* unit.fk_id_document = document.id\n*where* document.fk_id_job = 11698\n*order by* translation.id *asc*\n*limit* 50 *offset* 0\n\nQuery plan: http://explain.depesz.com/s/xlR\n\nWith the following modification, the time is reduced to 20-30 seconds (query\nplan <http://explain.depesz.com/s/VkI>)\n\n*with* CTE *as* (\n *select* tr.id\n *from* \"TRANSLATION\" tr\n *inner join *\"TRANSLATION_UNIT\" unit\n *on* tr.fk_id_translation_unit = unit.id\n *inner join* \"DOCUMENT\" doc\n *on* unit.fk_id_document = doc.id\n *where* doc.fk_id_job = 11698)\n*select* * *from *CTE\n*order by* id *asc*\n*limit* 50 *offset* 0;\n\n\nThere are about 212,000 records satisfying the query's criteria. When I\nchange 11698 to another id in the query so that there are now cca 40,000\nmatching records, the queries take 40ms and 55ms, respectively. The query\nplans also change: the original query <http://explain.depesz.com/s/cDT>, the\nCTE variant <http://explain.depesz.com/s/9ow>.\n\nIs it normal to experience 2100× increase in the execution time (or cca 450×\nfor the CTE variant) when the number of matching records grows just 5 times?\n\nI ran *ANALYZE* on all tables just before executing the queries. Indexes\nare on all columns involved.\n\nSystem info:\n\nPostgreSQL 9.2\n\nshared_buffers = 2048MB\neffective_cache_size = 4096MB\nwork_mem = 32MB\n\nTotal memory: 32GB\nCPU: Intel Xeon X3470 @ 2.93 GHz, 8MB cache\n\n\nThank you.\n\nI have a model like this:http://i.stack.imgur.com/qCZpD.pngwith approximately these table sizes\nJOB: 8kDOCUMENT: 150kTRANSLATION_UNIT: 14,5mTRANSLATION: 18,3mNow the following query takes about 90 seconds to finish.\nselect translation.idfrom \"TRANSLATION\" translation\n   inner join \"TRANSLATION_UNIT\" unit     on translation.fk_id_translation_unit = unit.id\n   inner join \"DOCUMENT\" document     on unit.fk_id_document = document.id     \nwhere document.fk_id_job = 11698order by translation.id asc\nlimit 50 offset 0Query plan: http://explain.depesz.com/s/xlR\nWith the following modification, the time is reduced to 20-30 seconds (query plan)with CTE as (\n     select tr.id     from \"TRANSLATION\" tr          inner join \"TRANSLATION_UNIT\" unit\n            on tr.fk_id_translation_unit = unit.id          inner join \"DOCUMENT\" doc\n            on unit.fk_id_document = doc.id          where doc.fk_id_job = 11698)\nselect * from CTEorder by id asc\nlimit 50 offset 0;There are about 212,000 records satisfying the query's criteria. When I change 11698 to another id in the query so that there are now cca 40,000 matching records, the queries take 40ms and 55ms, respectively. The query plans also change: the original query, the CTE variant.\nIs it normal to experience 2100× increase in the execution time (or cca 450× for the CTE variant) when the number of matching records grows just 5 times?\nI ran ANALYZE on all tables just before executing the queries. 
Indexes are on all columns involved.\nSystem info:\n\nPostgreSQL 9.2\nshared_buffers = 2048MB\neffective_cache_size = 4096MB\nwork_mem = 32MB\nTotal memory: 32GB\nCPU: Intel Xeon X3470 @ 2.93 GHz, 8MB cache\nThank you.", "msg_date": "Tue, 5 Nov 2013 09:36:21 +0100", "msg_from": "\"Standa K.\" <[email protected]>", "msg_from_op": true, "msg_subject": "ORDER BY performance deteriorates very quickly as dataset grows" }, { "msg_contents": "I suggest using denormalization: add a job_id column to the translation table.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/ORDER-BY-performance-deteriorates-very-quickly-as-dataset-grows-tp5776965p5776973.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Nov 2013 01:22:57 -0800 (PST)", "msg_from": "aasat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY performance deteriorates very quickly as dataset grows" }, { "msg_contents": " Add a job_id column to the translation table and create an index on the job_id and\ntranslation_id columns.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/ORDER-BY-performance-deteriorates-very-quickly-as-dataset-grows-tp5776965p5776974.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Nov 2013 01:24:39 -0800 (PST)", "msg_from": "aasat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY performance deteriorates very quickly as dataset grows" } ]
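A minimal sketch of the denormalization suggested in the two replies above, assuming the quoted table and column names from the original post; the new column name (fk_id_job), the index name, and the choice of trigger versus application code for keeping it in sync are assumptions, not something prescribed in the thread:

ALTER TABLE "TRANSLATION" ADD COLUMN fk_id_job integer;

-- One-off backfill through the existing chain
-- TRANSLATION -> TRANSLATION_UNIT -> DOCUMENT:
UPDATE "TRANSLATION" tr
SET fk_id_job = doc.fk_id_job
FROM "TRANSLATION_UNIT" unit
JOIN "DOCUMENT" doc ON unit.fk_id_document = doc.id
WHERE tr.fk_id_translation_unit = unit.id;

-- Composite index so the filter and the ORDER BY ... LIMIT can be served by
-- a single index walk in id order:
CREATE INDEX translation_job_id_idx ON "TRANSLATION" (fk_id_job, id);

-- The paginated query then needs no joins at all:
SELECT id
FROM "TRANSLATION"
WHERE fk_id_job = 11698
ORDER BY id ASC
LIMIT 50 OFFSET 0;

The application (or a trigger) then has to keep fk_id_job correct on insert and update, which is the usual price of denormalization.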
[ { "msg_contents": "Hi,\n\nI am in a need of a very robust (esp. fast in read, non-blocking in \nupdate) tree structure storage (95% trees are of depth <4, current max. \nis 12). We have 10k-100k trees now, millions in the future.\nI made many tests, benchmarks of usual operations, and after all, \nmaterialized path ('1.5.3' path notation) seems most promising.\n\nMy last candidates for its storage are ltree and integer[]. So I am \ncomparing the following benchmarking tables with exactly same data (5 \nregular trees - each node except leaves has 5 children, depth=9, total \nnodes=2.441.405, id numbered according to breadth-first traversal):\n\n\n*TABLES:*\nA/ integer[]\nCREATE TABLE test (\n id SERIAL,\n lpath ltree,\n CONSTRAINT test_pkey PRIMARY KEY(id)\n);\n\nCREATE INDEX test_idx1 ON test USING gist (lpath gist_ltree_ops);\n\nB/ ltree\nCREATE TABLE test (\n id SERIAL,\n apath INTEGER[],\n CONSTRAINT test_pkey PRIMARY KEY(id)\n);\n\nCREATE INDEX test_idx1 ON test USING gin (apath);\n\nSeparate single-table dbases, vacuum(analyz)ed.\n\n\n*TESTING MACHINE:*\nWindows 7, postgres 9.3.0, 4GB RAM\neffective_cache_size = 2GB\nwork_mem = 512MB\nshared_buffers = 1GB\n\n\n*THE PROBLEM:*\nMy intuition says integer[] should not be much worse than ltree (rather \nthe other way) but I am not able to reach such results. I believe I am \nmaking some trivial mistake rather than assuming false hypothesis. My \ngeneral question is, what more can I do to get better performance.\n\n\n*WHAT I DID:\n\n*/*1/ I checked gist index for integer[] via intarray extension. Query \nplans for <@ and @> operators do not use it (reported bug/feature). \nThat's why I am using gin.*/\n\n/*2/ Getting ancestors - same qplans, ltree slightly wins:*/\nA:\nselect * from test where apath@>(select apath from test where id=1)\n\nBitmap Heap Scan on test (cost=159.04..33950.48 rows=12206 width=60) \n(actual time=80.912..224.182 rows=488281 loops=1)\n Recheck Cond: (apath @> $0)\n Buffers: shared hit=18280\n InitPlan 1 (returns $0)\n -> Index Scan using test_pkey on test test_1 (cost=0.43..8.45 \nrows=1 width=56) (actual time=0.022..0.023 rows=1 loops=1)\n Index Cond: (id = 1)\n Buffers: shared hit=4\n -> Bitmap Index Scan on test_idx1 (cost=0.00..147.54 rows=12206 \nwidth=0) (actual time=76.896..76.896 rows=488281 loops=1)\n Index Cond: (apath @> $0)\n Buffers: shared hit=369\nTotal runtime: 240.408 ms\n\nB:\nselect * from test where lpath<@(select lpath from test where id=1)\n\nBitmap Heap Scan on test (cost=263.81..8674.72 rows=2445 width=83) \n(actual time=85.395..166.683 rows=488281 loops=1)\n Recheck Cond: (lpath <@ $0)\n Buffers: shared hit=22448\n InitPlan 1 (returns $0)\n -> Index Scan using test_pkey on test test_1 (cost=0.43..8.45 \nrows=1 width=79) (actual time=0.024..0.025 rows=1 loops=1)\n Index Cond: (id = 1)\n Buffers: shared hit=4\n -> Bitmap Index Scan on test_idx1 (cost=0.00..254.75 rows=2445 \nwidth=0) (actual time=83.029..83.029 rows=488281 loops=1)\n Index Cond: (lpath <@ $0)\n Buffers: shared hit=12269\nTotal runtime: 182.239 ms\n\n/*3/ Getting chosen nodes (eo) with chosen ancestors (ea) - index[] \nperforms very poorly, it's qplan uses additional Bitmap Heap Scan, still \nindices used in both cases.*/\n\nA:\nselect *\nfrom test eo\nwhere id in (select generate_series(3, 3000000, 5000)) and\n exists (\n select 1\n from test ea\n where ea.id in(select generate_series(1000, 3000, 3)) and\n ea.apath <@ eo.apath\n )\n\nNested Loop Semi Join (cost=140.10..1302851104.53 rows=6103 width=60) \n(actual 
time=1768.862..210525.597 rows=104 loops=1)\n Buffers: shared hit=8420909\n -> Nested Loop (cost=17.94..1652.31 rows=1220554 width=60) (actual \ntime=0.382..17.255 rows=489 loops=1)\n Buffers: shared hit=2292\n -> HashAggregate (cost=17.51..19.51 rows=200 width=4) (actual \ntime=0.352..1.486 rows=600 loops=1)\n -> Result (cost=0.00..5.01 rows=1000 width=0) (actual \ntime=0.009..0.100 rows=600 loops=1)\n -> Index Scan using test_pkey on test eo (cost=0.43..8.15 \nrows=1 width=60) (actual time=0.017..0.021 rows=1 loops=600)\n Index Cond: (id = (generate_series(3, 3000000, 5000)))\n Buffers: shared hit=2292\n -> Hash Semi Join (cost=122.16..1133.92 rows=6103 width=56) (actual \ntime=430.482..430.482 rows=0 loops=489)\n Hash Cond: (ea.id = (generate_series(1000, 3000, 3)))\n Buffers: shared hit=8418617\n -> Bitmap Heap Scan on test ea (cost=94.65..251.23 rows=12206 \nwidth=60) (actual time=271.034..430.278 rows=8 loops=489)\n Recheck Cond: (apath <@ eo.apath)\n Rows Removed by Index Recheck: 444335\n Buffers: shared hit=8418617\n -> Bitmap Index Scan on test_idx1 (cost=0.00..91.60 \nrows=12206 width=0) (actual time=155.355..155.355 rows=488281 loops=489)\n Index Cond: (apath <@ eo.apath)\n Buffers: shared hit=237668\n -> Hash (cost=15.01..15.01 rows=1000 width=4) (actual \ntime=0.305..0.305 rows=667 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 24kB\n -> Result (cost=0.00..5.01 rows=1000 width=0) (actual \ntime=0.003..0.116 rows=667 loops=1)\nTotal runtime: 210526.004 ms\n\nB:\nselect *\nfrom test eo\nwhere id in (select generate_series(3, 3000000, 5000)) and\n exists (\n select 1\n from test ea\n where ea.id in(select generate_series(1000, 3000, 3)) and\n ea.lpath @> eo.lpath\n )\n\nNested Loop Semi Join (cost=45.86..276756955.40 rows=1223 width=83) \n(actual time=2.985..226.161 rows=104 loops=1)\n Buffers: shared hit=27675\n -> Nested Loop (cost=17.94..1687.51 rows=1222486 width=83) (actual \ntime=0.660..5.987 rows=489 loops=1)\n Buffers: shared hit=2297\n -> HashAggregate (cost=17.51..19.51 rows=200 width=4) (actual \ntime=0.632..1.008 rows=600 loops=1)\n -> Result (cost=0.00..5.01 rows=1000 width=0) (actual \ntime=0.007..0.073 rows=600 loops=1)\n -> Index Scan using test_pkey on test eo (cost=0.43..8.33 \nrows=1 width=83) (actual time=0.007..0.007 rows=1 loops=600)\n Index Cond: (id = (generate_series(3, 3000000, 5000)))\n Buffers: shared hit=2297\n -> Hash Semi Join (cost=27.92..242.30 rows=1223 width=79) (actual \ntime=0.449..0.449 rows=0 loops=489)\n Hash Cond: (ea.id = (generate_series(1000, 3000, 3)))\n Buffers: shared hit=25378\n -> Index Scan using test_idx1 on test ea (cost=0.41..43.43 \nrows=2445 width=83) (actual time=0.060..0.445 rows=8 loops=489)\n Index Cond: (lpath @> eo.lpath)\n Buffers: shared hit=25378\n -> Hash (cost=15.01..15.01 rows=1000 width=4) (actual \ntime=0.178..0.178 rows=667 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 24kB\n -> Result (cost=0.00..5.01 rows=1000 width=0) (actual \ntime=0.003..0.071 rows=667 loops=1)\nTotal runtime: 226.308 ms\n\n/*3.a/ If I turn off the index for B the query is much faster:*/\nNested Loop Semi Join (cost=35.88..20535547247.01 rows=6103 width=60) \n(actual time=17.257..1112.276 rows=104 loops=1)\n Join Filter: (ea.apath <@ eo.apath)\n Rows Removed by Join Filter: 287529\n Buffers: shared hit=1155469\n -> Nested Loop (cost=17.94..1652.31 rows=1220554 width=60) (actual \ntime=0.334..5.307 rows=489 loops=1)\n Buffers: shared hit=2292\n -> HashAggregate (cost=17.51..19.51 rows=200 width=4) (actual \ntime=0.297..0.598 
rows=600 loops=1)\n -> Result (cost=0.00..5.01 rows=1000 width=0) (actual \ntime=0.008..0.088 rows=600 loops=1)\n -> Index Scan using test_pkey on test eo (cost=0.43..8.15 \nrows=1 width=60) (actual time=0.007..0.007 rows=1 loops=600)\n Index Cond: (id = (generate_series(3, 3000000, 5000)))\n Buffers: shared hit=2292\n -> Nested Loop (cost=17.94..1652.31 rows=1220554 width=56) (actual \ntime=0.004..1.946 rows=588 loops=489)\n Buffers: shared hit=1153177\n -> HashAggregate (cost=17.51..19.51 rows=200 width=4) (actual \ntime=0.001..0.089 rows=588 loops=489)\n -> Result (cost=0.00..5.01 rows=1000 width=0) (actual \ntime=0.005..0.103 rows=667 loops=1)\n -> Index Scan using test_pkey on test ea (cost=0.43..8.15 \nrows=1 width=60) (actual time=0.002..0.003 rows=1 loops=287633)\n Index Cond: (id = (generate_series(1000, 3000, 3)))\n Buffers: shared hit=1153177\nTotal runtime: 1112.411 ms\n\n/*3.b/ With the index on, if I go down to effective_cache_size = 256MB, \nB also skips the index usage, same qplan as in 3.a is used. Still the \nindex is used for both versions of query 2 and ltree version of query 3.*/\n\n\n*QUESTIONS:*\n1/ Is my hypothesis about similar performance of ltree and integer[] \ncorrect?\n2/ If so, what should I do to get it?\n3/ Is there a way to improve the performance of <@ and @> operators? In \nfact, as I am having a tree path in the column, I only need to check if \npath_a 'starts_with' path_b to get ancestors/descendants. Therefore \nsomething more effective than 'contains' might be used. Is there any way?\n4/ Do I understand properly that index on integer[] is much more \nmemory-consuming, and therefore there are differences in query plans / \nexecution times?\n\n\nThanks for any help,\n\nJan\n\n\n\n\n\n\n Hi,\n\n I am in a need of a very robust (esp. fast in read, non-blocking in\n update) tree structure storage (95% trees are of depth <4,\n current max. is 12). We have 10k-100k trees now, millions in the\n future.\n I made many tests, benchmarks of usual operations, and after all,\n materialized path ('1.5.3' path notation) seems most promising.\n\n My last candidates for its storage are ltree and integer[]. So I am\n comparing the following benchmarking tables with exactly same data\n (5 regular trees - each node except leaves has 5 children, depth=9,\n total nodes=2.441.405, id numbered according to breadth-first\n traversal):\n\n\nTABLES:\n A/ integer[]\n CREATE TABLE test (\n     id SERIAL,\n     lpath ltree,\n     CONSTRAINT test_pkey PRIMARY KEY(id)\n );\n\n CREATE INDEX test_idx1 ON test USING gist (lpath gist_ltree_ops);\n\n B/ ltree\n CREATE TABLE test (\n     id SERIAL,\n     apath INTEGER[],\n     CONSTRAINT test_pkey PRIMARY KEY(id)\n );\n\n CREATE INDEX test_idx1 ON test USING gin (apath);\n\n Separate single-table dbases, vacuum(analyz)ed.\n\n\nTESTING MACHINE:\n Windows 7, postgres 9.3.0, 4GB RAM\n effective_cache_size = 2GB\n work_mem = 512MB\n shared_buffers = 1GB\n\n\nTHE PROBLEM:\n My intuition says integer[] should not be much worse than ltree\n (rather the other way) but I am not able to reach such results. I\n believe I am making some trivial mistake rather than assuming false\n hypothesis. My general question is, what more can I do to get better\n performance.\n\n\nWHAT I DID:\n\n1/ I checked gist index for integer[] via intarray\n extension. Query plans for <@ and @> operators do not use\n it (reported bug/feature). 
That's why I am using gin.\n\n2/ Getting ancestors - same qplans, ltree slightly wins:\n A:\n select * from test where apath@>(select apath from test where\n id=1)\n\n Bitmap Heap Scan on test  (cost=159.04..33950.48 rows=12206\n width=60) (actual time=80.912..224.182 rows=488281 loops=1)\n   Recheck Cond: (apath @> $0)\n   Buffers: shared hit=18280\n   InitPlan 1 (returns $0)\n     ->  Index Scan using test_pkey on test test_1 \n (cost=0.43..8.45 rows=1 width=56) (actual time=0.022..0.023 rows=1\n loops=1)\n           Index Cond: (id = 1)\n           Buffers: shared hit=4\n   ->  Bitmap Index Scan on test_idx1  (cost=0.00..147.54\n rows=12206 width=0) (actual time=76.896..76.896 rows=488281 loops=1)\n         Index Cond: (apath @> $0)\n         Buffers: shared hit=369\n Total runtime: 240.408 ms\n\n B:\n select * from test where lpath<@(select lpath from test where\n id=1)\n\n Bitmap Heap Scan on test  (cost=263.81..8674.72 rows=2445 width=83)\n (actual time=85.395..166.683 rows=488281 loops=1)\n   Recheck Cond: (lpath <@ $0)\n   Buffers: shared hit=22448\n   InitPlan 1 (returns $0)\n     ->  Index Scan using test_pkey on test test_1 \n (cost=0.43..8.45 rows=1 width=79) (actual time=0.024..0.025 rows=1\n loops=1)\n           Index Cond: (id = 1)\n           Buffers: shared hit=4\n   ->  Bitmap Index Scan on test_idx1  (cost=0.00..254.75\n rows=2445 width=0) (actual time=83.029..83.029 rows=488281 loops=1)\n         Index Cond: (lpath <@ $0)\n         Buffers: shared hit=12269\n Total runtime: 182.239 ms\n\n3/ Getting chosen nodes (eo) with chosen ancestors (ea) -\n index[] performs very poorly, it's qplan uses additional Bitmap\n Heap Scan, still indices used in both cases.\n\n A:\n select *\n from test eo\n where id in (select generate_series(3, 3000000, 5000)) and\n     exists (\n         select 1 \n         from test ea \n         where ea.id in(select generate_series(1000, 3000, 3)) and\n             ea.apath <@ eo.apath\n     )\n\n Nested Loop Semi Join  (cost=140.10..1302851104.53 rows=6103\n width=60) (actual time=1768.862..210525.597 rows=104 loops=1)\n   Buffers: shared hit=8420909\n   ->  Nested Loop  (cost=17.94..1652.31 rows=1220554 width=60)\n (actual time=0.382..17.255 rows=489 loops=1)\n         Buffers: shared hit=2292\n         ->  HashAggregate  (cost=17.51..19.51 rows=200 width=4)\n (actual time=0.352..1.486 rows=600 loops=1)\n               ->  Result  (cost=0.00..5.01 rows=1000 width=0)\n (actual time=0.009..0.100 rows=600 loops=1)\n         ->  Index Scan using test_pkey on test eo \n (cost=0.43..8.15 rows=1 width=60) (actual time=0.017..0.021 rows=1\n loops=600)\n               Index Cond: (id = (generate_series(3, 3000000, 5000)))\n               Buffers: shared hit=2292\n   ->  Hash Semi Join  (cost=122.16..1133.92 rows=6103 width=56)\n (actual time=430.482..430.482 rows=0 loops=489)\n         Hash Cond: (ea.id = (generate_series(1000, 3000, 3)))\n         Buffers: shared hit=8418617\n         ->  Bitmap Heap Scan on test ea  (cost=94.65..251.23\n rows=12206 width=60) (actual time=271.034..430.278 rows=8 loops=489)\n               Recheck Cond: (apath <@ eo.apath)\n               Rows Removed by Index Recheck: 444335\n               Buffers: shared hit=8418617\n               ->  Bitmap Index Scan on test_idx1 \n (cost=0.00..91.60 rows=12206 width=0) (actual time=155.355..155.355\n rows=488281 loops=489)\n                     Index Cond: (apath <@ eo.apath)\n                     Buffers: shared hit=237668\n         ->  Hash  (cost=15.01..15.01 rows=1000 width=4) 
(actual\n time=0.305..0.305 rows=667 loops=1)\n               Buckets: 1024  Batches: 1  Memory Usage: 24kB\n               ->  Result  (cost=0.00..5.01 rows=1000 width=0)\n (actual time=0.003..0.116 rows=667 loops=1)\n Total runtime: 210526.004 ms\n\n B:\n select *\n from test eo\n where id in (select generate_series(3, 3000000, 5000)) and\n     exists (\n         select 1 \n         from test ea \n         where ea.id in(select generate_series(1000, 3000, 3)) and\n             ea.lpath @> eo.lpath\n     )\n     \n Nested Loop Semi Join  (cost=45.86..276756955.40 rows=1223 width=83)\n (actual time=2.985..226.161 rows=104 loops=1)\n   Buffers: shared hit=27675\n   ->  Nested Loop  (cost=17.94..1687.51 rows=1222486 width=83)\n (actual time=0.660..5.987 rows=489 loops=1)\n         Buffers: shared hit=2297\n         ->  HashAggregate  (cost=17.51..19.51 rows=200 width=4)\n (actual time=0.632..1.008 rows=600 loops=1)\n               ->  Result  (cost=0.00..5.01 rows=1000 width=0)\n (actual time=0.007..0.073 rows=600 loops=1)\n         ->  Index Scan using test_pkey on test eo \n (cost=0.43..8.33 rows=1 width=83) (actual time=0.007..0.007 rows=1\n loops=600)\n               Index Cond: (id = (generate_series(3, 3000000, 5000)))\n               Buffers: shared hit=2297\n   ->  Hash Semi Join  (cost=27.92..242.30 rows=1223 width=79)\n (actual time=0.449..0.449 rows=0 loops=489)\n         Hash Cond: (ea.id = (generate_series(1000, 3000, 3)))\n         Buffers: shared hit=25378\n         ->  Index Scan using test_idx1 on test ea \n (cost=0.41..43.43 rows=2445 width=83) (actual time=0.060..0.445\n rows=8 loops=489)\n               Index Cond: (lpath @> eo.lpath)\n               Buffers: shared hit=25378\n         ->  Hash  (cost=15.01..15.01 rows=1000 width=4) (actual\n time=0.178..0.178 rows=667 loops=1)\n               Buckets: 1024  Batches: 1  Memory Usage: 24kB\n               ->  Result  (cost=0.00..5.01 rows=1000 width=0)\n (actual time=0.003..0.071 rows=667 loops=1)\n Total runtime: 226.308 ms\n\n3.a/ If I turn off the index for B the query is much faster:\n Nested Loop Semi Join  (cost=35.88..20535547247.01 rows=6103\n width=60) (actual time=17.257..1112.276 rows=104 loops=1)\n   Join Filter: (ea.apath <@ eo.apath)\n   Rows Removed by Join Filter: 287529\n   Buffers: shared hit=1155469\n   ->  Nested Loop  (cost=17.94..1652.31 rows=1220554 width=60)\n (actual time=0.334..5.307 rows=489 loops=1)\n         Buffers: shared hit=2292\n         ->  HashAggregate  (cost=17.51..19.51 rows=200 width=4)\n (actual time=0.297..0.598 rows=600 loops=1)\n               ->  Result  (cost=0.00..5.01 rows=1000 width=0)\n (actual time=0.008..0.088 rows=600 loops=1)\n         ->  Index Scan using test_pkey on test eo \n (cost=0.43..8.15 rows=1 width=60) (actual time=0.007..0.007 rows=1\n loops=600)\n               Index Cond: (id = (generate_series(3, 3000000, 5000)))\n               Buffers: shared hit=2292\n   ->  Nested Loop  (cost=17.94..1652.31 rows=1220554 width=56)\n (actual time=0.004..1.946 rows=588 loops=489)\n         Buffers: shared hit=1153177\n         ->  HashAggregate  (cost=17.51..19.51 rows=200 width=4)\n (actual time=0.001..0.089 rows=588 loops=489)\n               ->  Result  (cost=0.00..5.01 rows=1000 width=0)\n (actual time=0.005..0.103 rows=667 loops=1)\n         ->  Index Scan using test_pkey on test ea \n (cost=0.43..8.15 rows=1 width=60) (actual time=0.002..0.003 rows=1\n loops=287633)\n               Index Cond: (id = (generate_series(1000, 3000, 3)))\n               Buffers: shared 
hit=1153177\n Total runtime: 1112.411 ms\n\n3.b/ With the index on, if I go down to effective_cache_size =\n 256MB, B also skips the index usage, same qplan as in 3.a is\n used. Still the index is used for both versions of query 2 and\n ltree version of query 3.\n\n\nQUESTIONS:\n 1/ Is my hypothesis about similar performance of ltree and integer[]\n correct?\n 2/ If so, what should I do to get it?\n 3/ Is there a way to improve the performance of <@ and @>\n operators? In fact, as I am having a tree path in the column, I only\n need to check if path_a 'starts_with' path_b to get\n ancestors/descendants. Therefore something more effective than\n 'contains' might be used. Is there any way?\n 4/ Do I understand properly that index on integer[] is much more\n memory-consuming, and therefore there are differences in query plans\n / execution times?\n\n\n Thanks for any help,\n\n Jan", "msg_date": "Tue, 05 Nov 2013 13:25:06 +0100", "msg_from": "Jan Walter <[email protected]>", "msg_from_op": true, "msg_subject": "Trees: integer[] outperformed by ltree" }, { "msg_contents": "On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter <[email protected]> wrote:\n> Hi,\n>\n> I am in a need of a very robust (esp. fast in read, non-blocking in update)\n> tree structure storage (95% trees are of depth <4, current max. is 12). We\n> have 10k-100k trees now, millions in the future.\n> I made many tests, benchmarks of usual operations, and after all,\n> materialized path ('1.5.3' path notation) seems most promising.\n\nmaterialized path approaches tend to be ideal if the tree remains\nrelatively static and is not too deep. The downside with matpath is\nthat if a you have to move a node around in the tree, then all the\nchild elements paths' have to be expensively updated. I bring this up\nas it relates to your 'non-blocking in update' requirement: in matpath\nan update to parent can update an unbounded number of records.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Nov 2013 11:30:27 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trees: integer[] outperformed by ltree" }, { "msg_contents": "On Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter <[email protected]> wrote:\n>> Hi,\n>>\n>> I am in a need of a very robust (esp. fast in read, non-blocking in update)\n>> tree structure storage (95% trees are of depth <4, current max. is 12). We\n>> have 10k-100k trees now, millions in the future.\n>> I made many tests, benchmarks of usual operations, and after all,\n>> materialized path ('1.5.3' path notation) seems most promising.\n>\n> materialized path approaches tend to be ideal if the tree remains\n> relatively static and is not too deep. The downside with matpath is\n> that if a you have to move a node around in the tree, then all the\n> child elements paths' have to be expensively updated. I bring this up\n> as it relates to your 'non-blocking in update' requirement: in matpath\n> an update to parent can update an unbounded number of records.\n\n\nhm, why do you need gist/gin for the int[] formulation? what are your\nlookup requirements? 
with int[] you can typically do contains with\nsimple btree.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Nov 2013 13:51:18 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trees: integer[] outperformed by ltree" }, { "msg_contents": "On 5.11.2013 20:51, Merlin Moncure wrote:\n> On Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure <[email protected]> wrote:\n>> On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter <[email protected]> wrote:\n>>> Hi,\n>>>\n>>> I am in a need of a very robust (esp. fast in read, non-blocking in update)\n>>> tree structure storage (95% trees are of depth <4, current max. is 12). We\n>>> have 10k-100k trees now, millions in the future.\n>>> I made many tests, benchmarks of usual operations, and after all,\n>>> materialized path ('1.5.3' path notation) seems most promising.\n>> materialized path approaches tend to be ideal if the tree remains\n>> relatively static and is not too deep. The downside with matpath is\n>> that if a you have to move a node around in the tree, then all the\n>> child elements paths' have to be expensively updated. I bring this up\n>> as it relates to your 'non-blocking in update' requirement: in matpath\n>> an update to parent can update an unbounded number of records.\n\nThanks for your remark.\nMaterialized path is still better in updates than nested sets we are \nusing currently.\nAlthough adjacency lists with recursive CTEs were initially my favorite \nsubstitution (smallest lock space for node relocation), whey are \ncompletely failing in e.g. order by path (depth) task (150s vs. 31ms via \ninteger[]), and twice slower in simple descendant-based tasks.\nI am yet considering it (if I moved e.g. ordering to application server \nlevel), and still need to rewrite couple of more sophisticated scenarios \nto CTEs to be absolutely sure if it fails; anyway MP seems more promising.\nI also tend to have the tree structure completely independent on other \ndata belonging to nodes.\n\nOr did you have any other model in your mind?\n\n> hm, why do you need gist/gin for the int[] formulation? what are your\n> lookup requirements? with int[] you can typically do contains with\n> simple btree.\n\nI do not think so, i.e. neither my tests nor \nhttp://www.postgresql.org/docs/current/static/indexes-types.html showed \nit works for <@ or @>. Using it for <= operator puts btree index to \nquery plan in some (not all) scenarios. Still it needs to be accompanied \nwith <@, and the performance goes in more complex scenarios down.\n\n'Start with' I was mentioning, would be ideal.\n\n\nJan\n\nP. S. 
Just to have a feeling, this is a simple overview of my current \nbenchmarks.\n\n*scenario**\n* \t*adjacency list* \t*nested set* \t*ltree path* \t*array path* \t*neo4j*\nancestors (42) \t16ms \t31ms \t31ms \t15ms \t50ms/5ms\nancestors (1.000.000) \t16ms \t50ms \t31ms \t15ms \t\ndescendants (node 42) \t180ms \t90ms \t90ms \t140ms \t2s\ndescendants (node 1) \t4s \t2s \t2s \t2s \tabove all bounds\ndescendants 3 far \t15ms \t20s \t31ms \t65ms \t50ms\norder by path (depth) \t150s \t40ms \t31ms \t31ms \t\norder by path (width) \t\n\t\n\t\n\t\n\t\nchildren_above_fitting w/ tags \t\t850ms \t270ms \t950ms \t\nids as descendants below ids \t\t850ms \t250ms \t950ms \t\n\n\n\n\n\n\n\n\nOn 5.11.2013 20:51, Merlin Moncure\n wrote:\n\n\nOn Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure <[email protected]> wrote:\n\n\nOn Tue, Nov 5, 2013 at 6:25 AM, Jan Walter <[email protected]> wrote:\n\n\nHi,\n\nI am in a need of a very robust (esp. fast in read, non-blocking in update)\ntree structure storage (95% trees are of depth <4, current max. is 12). We\nhave 10k-100k trees now, millions in the future.\nI made many tests, benchmarks of usual operations, and after all,\nmaterialized path ('1.5.3' path notation) seems most promising.\n\n\n\nmaterialized path approaches tend to be ideal if the tree remains\nrelatively static and is not too deep. The downside with matpath is\nthat if a you have to move a node around in the tree, then all the\nchild elements paths' have to be expensively updated. I bring this up\nas it relates to your 'non-blocking in update' requirement: in matpath\nan update to parent can update an unbounded number of records.\n\n\n\n Thanks for your remark.\n Materialized path is still better in updates than nested sets we are\n using currently.\n Although adjacency lists with recursive CTEs were initially my\n favorite substitution (smallest lock space for node relocation),\n whey are completely failing in e.g. order by path (depth) task (150s\n vs. 31ms via integer[]), and twice slower in simple descendant-based\n tasks.\n I am yet considering it (if I moved e.g. ordering to application\n server level), and still need to rewrite couple of more\n sophisticated scenarios to CTEs to be absolutely sure if it fails;\n anyway MP seems more promising.\n I also tend to have the tree structure completely independent on\n other data belonging to nodes.\n\n Or did you have any other model in your mind?\n\n\nhm, why do you need gist/gin for the int[] formulation? what are your\nlookup requirements? with int[] you can typically do contains with\nsimple btree.\n\n\n\n I do not think so, i.e. neither my tests nor http://www.postgresql.org/docs/current/static/indexes-types.html\n showed it works for <@ or @>. Using it for <= operator puts\n btree index to query plan in some (not all) scenarios. Still it\n needs to be accompanied with <@, and the performance goes in more\n complex scenarios down.\n\n 'Start with' I was mentioning, would be ideal.\n\n\n Jan\n\n P. S. 
Just to have a feeling, this is a simple overview of my\n current benchmarks.\n\n\n \n\nscenario\n\nadjacency\n list\nnested set\nltree path\narray path\nneo4j\n\n\nancestors (42)\n16ms\n31ms\n31ms\n15ms\n50ms/5ms\n\n\nancestors (1.000.000)\n16ms\n50ms\n31ms\n15ms\n\n\n\n\ndescendants (node 42)\n180ms\n90ms\n90ms\n140ms\n2s\n\n\ndescendants (node 1)\n4s\n2s\n2s\n2s\nabove all bounds\n\n\ndescendants 3 far\n15ms\n20s\n31ms\n65ms\n50ms\n\n\norder by path (depth)\n150s\n40ms\n31ms\n31ms\n\n\n\n\norder by path (width)\n\n\n\n\n\n\n\n\n\n\n\n\nchildren_above_fitting\n w/ tags\n \n850ms\n270ms\n950ms\n\n\n\n\nids as descendants below\n ids\n \n850ms\n250ms\n950ms", "msg_date": "Tue, 05 Nov 2013 22:52:32 +0100", "msg_from": "Jan Walter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trees: integer[] outperformed by ltree" }, { "msg_contents": "On Tue, Nov 5, 2013 at 3:52 PM, Jan Walter <[email protected]> wrote:\n>\n> On 5.11.2013 20:51, Merlin Moncure wrote:\n>\n> On Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure <[email protected]> wrote:\n>\n> On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter <[email protected]> wrote:\n>\n> Hi,\n>\n> I am in a need of a very robust (esp. fast in read, non-blocking in update)\n> tree structure storage (95% trees are of depth <4, current max. is 12). We\n> have 10k-100k trees now, millions in the future.\n> I made many tests, benchmarks of usual operations, and after all,\n> materialized path ('1.5.3' path notation) seems most promising.\n>\n> materialized path approaches tend to be ideal if the tree remains\n> relatively static and is not too deep. The downside with matpath is\n> that if a you have to move a node around in the tree, then all the\n> child elements paths' have to be expensively updated. I bring this up\n> as it relates to your 'non-blocking in update' requirement: in matpath\n> an update to parent can update an unbounded number of records.\n>\n>\n> Thanks for your remark.\n> Materialized path is still better in updates than nested sets we are using currently.\n> Although adjacency lists with recursive CTEs were initially my favorite substitution (smallest lock space for node relocation), whey are completely failing in e.g. order by path (depth) task (150s vs. 31ms via integer[]), and twice slower in simple descendant-based tasks.\n> I am yet considering it (if I moved e.g. ordering to application server level), and still need to rewrite couple of more sophisticated scenarios to CTEs to be absolutely sure if it fails; anyway MP seems more promising.\n> I also tend to have the tree structure completely independent on other data belonging to nodes.\n>\n> Or did you have any other model in your mind?\n>\n>\n> hm, why do you need gist/gin for the int[] formulation? what are your\n> lookup requirements? with int[] you can typically do contains with\n> simple btree.\n>\n>\n> I do not think so, i.e. neither my tests nor http://www.postgresql.org/docs/current/static/indexes-types.html showed it works for <@ or @>. Using it for <= operator puts btree index to query plan in some (not all) scenarios. Still it needs to be accompanied with <@, and the performance goes in more complex scenarios down.\n>\n> 'Start with' I was mentioning, would be ideal.\n\nThis is trivial especially if you don't have to deal with null edge\ncase. 
If you wan to search btree'd array column for all elements\nstarting with [1,2,3],\n\nSELECT * FROM foo\n WHERE\n id >= array[1,2,3]\n AND id < array[1,2,4];\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Nov 2013 16:19:26 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trees: integer[] outperformed by ltree" }, { "msg_contents": "On 5.11.2013 23:19, Merlin Moncure wrote:\n> On Tue, Nov 5, 2013 at 3:52 PM, Jan Walter <[email protected]> wrote:\n>> On 5.11.2013 20:51, Merlin Moncure wrote:\n>>\n>> On Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure <[email protected]> wrote:\n>>\n>> On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> I am in a need of a very robust (esp. fast in read, non-blocking in update)\n>> tree structure storage (95% trees are of depth <4, current max. is 12). We\n>> have 10k-100k trees now, millions in the future.\n>> I made many tests, benchmarks of usual operations, and after all,\n>> materialized path ('1.5.3' path notation) seems most promising.\n>>\n>> materialized path approaches tend to be ideal if the tree remains\n>> relatively static and is not too deep. The downside with matpath is\n>> that if a you have to move a node around in the tree, then all the\n>> child elements paths' have to be expensively updated. I bring this up\n>> as it relates to your 'non-blocking in update' requirement: in matpath\n>> an update to parent can update an unbounded number of records.\n>>\n>>\n>> Thanks for your remark.\n>> Materialized path is still better in updates than nested sets we are using currently.\n>> Although adjacency lists with recursive CTEs were initially my favorite substitution (smallest lock space for node relocation), whey are completely failing in e.g. order by path (depth) task (150s vs. 31ms via integer[]), and twice slower in simple descendant-based tasks.\n>> I am yet considering it (if I moved e.g. ordering to application server level), and still need to rewrite couple of more sophisticated scenarios to CTEs to be absolutely sure if it fails; anyway MP seems more promising.\n>> I also tend to have the tree structure completely independent on other data belonging to nodes.\n>>\n>> Or did you have any other model in your mind?\n>>\n>>\n>> hm, why do you need gist/gin for the int[] formulation? what are your\n>> lookup requirements? with int[] you can typically do contains with\n>> simple btree.\n>>\n>>\n>> I do not think so, i.e. neither my tests nor http://www.postgresql.org/docs/current/static/indexes-types.html showed it works for <@ or @>. Using it for <= operator puts btree index to query plan in some (not all) scenarios. Still it needs to be accompanied with <@, and the performance goes in more complex scenarios down.\n>>\n>> 'Start with' I was mentioning, would be ideal.\n> This is trivial especially if you don't have to deal with null edge\n> case. If you wan to search btree'd array column for all elements\n> starting with [1,2,3],\n>\n> SELECT * FROM foo\n> WHERE\n> id >= array[1,2,3]\n> AND id < array[1,2,4];\n>\n> merlin\n\nYou meant path >= array[1,2,3], right?\nGreat for descendants. Almost the same performance (btree vs. gin) in \nsimple cases.\nFor ancestors, ids can be retrieved directly using unnest. 
I have to \ncheck how fast that is in real-life situations, as well as if all my \ncases can be solved without <@ operators (e.g. for descendants of \nfiltered nodes, array manipulation has to be used).\n\nStill I do not understand the behavior of gin index, and I read your \nrecommendation as *don't use it*.\n\nThanks a lot for your hints,\nJan\n\n\n\n\n\n\n\n\nOn 5.11.2013 23:19, Merlin Moncure\n wrote:\n\n\nOn Tue, Nov 5, 2013 at 3:52 PM, Jan Walter <[email protected]> wrote:\n\n\n\nOn 5.11.2013 20:51, Merlin Moncure wrote:\n\nOn Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure <[email protected]> wrote:\n\nOn Tue, Nov 5, 2013 at 6:25 AM, Jan Walter <[email protected]> wrote:\n\nHi,\n\nI am in a need of a very robust (esp. fast in read, non-blocking in update)\ntree structure storage (95% trees are of depth <4, current max. is 12). We\nhave 10k-100k trees now, millions in the future.\nI made many tests, benchmarks of usual operations, and after all,\nmaterialized path ('1.5.3' path notation) seems most promising.\n\nmaterialized path approaches tend to be ideal if the tree remains\nrelatively static and is not too deep. The downside with matpath is\nthat if a you have to move a node around in the tree, then all the\nchild elements paths' have to be expensively updated. I bring this up\nas it relates to your 'non-blocking in update' requirement: in matpath\nan update to parent can update an unbounded number of records.\n\n\nThanks for your remark.\nMaterialized path is still better in updates than nested sets we are using currently.\nAlthough adjacency lists with recursive CTEs were initially my favorite substitution (smallest lock space for node relocation), whey are completely failing in e.g. order by path (depth) task (150s vs. 31ms via integer[]), and twice slower in simple descendant-based tasks.\nI am yet considering it (if I moved e.g. ordering to application server level), and still need to rewrite couple of more sophisticated scenarios to CTEs to be absolutely sure if it fails; anyway MP seems more promising.\nI also tend to have the tree structure completely independent on other data belonging to nodes.\n\nOr did you have any other model in your mind?\n\n\nhm, why do you need gist/gin for the int[] formulation? what are your\nlookup requirements? with int[] you can typically do contains with\nsimple btree.\n\n\nI do not think so, i.e. neither my tests nor http://www.postgresql.org/docs/current/static/indexes-types.html showed it works for <@ or @>. Using it for <= operator puts btree index to query plan in some (not all) scenarios. Still it needs to be accompanied with <@, and the performance goes in more complex scenarios down.\n\n'Start with' I was mentioning, would be ideal.\n\n\n\nThis is trivial especially if you don't have to deal with null edge\ncase. If you wan to search btree'd array column for all elements\nstarting with [1,2,3],\n\nSELECT * FROM foo\n WHERE\n id >= array[1,2,3]\n AND id < array[1,2,4];\n\nmerlin\n\n\n\n You meant path >= array[1,2,3], right?\n Great for descendants. Almost the same performance (btree vs. gin)\n in simple cases.\n For ancestors, ids can be retrieved directly using unnest. I have to\n check how fast that is in real-life situations, as well as if all my\n cases can be solved without <@ operators (e.g. 
for descendants of\n filtered nodes, array manipulation has to be used).\n\n Still I do not understand the behavior of gin index, and I read your\n recommendation as don't use it.\n\n Thanks a lot for your hints,\n Jan", "msg_date": "Wed, 06 Nov 2013 01:27:14 +0100", "msg_from": "Jan Walter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trees: integer[] outperformed by ltree" }, { "msg_contents": "On Tue, Nov 5, 2013 at 6:27 PM, Jan Walter <[email protected]> wrote:\n> On 5.11.2013 23:19, Merlin Moncure wrote:\n> On Tue, Nov 5, 2013 at 3:52 PM, Jan Walter <[email protected]> wrote:\n> On 5.11.2013 20:51, Merlin Moncure wrote:\n> On Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter <[email protected]> wrote:\n>\n> I am in a need of a very robust (esp. fast in read, non-blocking in update)\n> tree structure storage (95% trees are of depth <4, current max. is 12). We\n> have 10k-100k trees now, millions in the future.\n> I made many tests, benchmarks of usual operations, and after all,\n> materialized path ('1.5.3' path notation) seems most promising.\n>\n> materialized path approaches tend to be ideal if the tree remains\n> relatively static and is not too deep. The downside with matpath is\n> that if a you have to move a node around in the tree, then all the\n> child elements paths' have to be expensively updated. I bring this up\n> as it relates to your 'non-blocking in update' requirement: in matpath\n> an update to parent can update an unbounded number of records.\n>\n>\n> Thanks for your remark.\n> Materialized path is still better in updates than nested sets we are using\n> currently.\n> Although adjacency lists with recursive CTEs were initially my favorite\n> substitution (smallest lock space for node relocation), whey are completely\n> failing in e.g. order by path (depth) task (150s vs. 31ms via integer[]),\n> and twice slower in simple descendant-based tasks.\n> I am yet considering it (if I moved e.g. ordering to application server\n> level), and still need to rewrite couple of more sophisticated scenarios to\n> CTEs to be absolutely sure if it fails; anyway MP seems more promising.\n> I also tend to have the tree structure completely independent on other data\n> belonging to nodes.\n>\n> Or did you have any other model in your mind?\n>\n>\n> hm, why do you need gist/gin for the int[] formulation? what are your\n> lookup requirements? with int[] you can typically do contains with\n> simple btree.\n>\n>\n> I do not think so, i.e. neither my tests nor\n> http://www.postgresql.org/docs/current/static/indexes-types.html showed it\n> works for <@ or @>. Using it for <= operator puts btree index to query plan\n> in some (not all) scenarios. Still it needs to be accompanied with <@, and\n> the performance goes in more complex scenarios down.\n>\n> 'Start with' I was mentioning, would be ideal.\n>\n> This is trivial especially if you don't have to deal with null edge\n> case. If you wan to search btree'd array column for all elements\n> starting with [1,2,3],\n>\n> SELECT * FROM foo\n> WHERE\n> id >= array[1,2,3]\n> AND id < array[1,2,4];\n>\n>\n> You meant path >= array[1,2,3], right?\n\nright -- I think you get the idea.\n\n> Great for descendants. Almost the same performance (btree vs. gin) in simple\n> cases.\n> For ancestors, ids can be retrieved directly using unnest. I have to check\n> how fast that is in real-life situations, as well as if all my cases can be\n> solved without <@ operators (e.g. 
for descendants of filtered nodes, array\n> manipulation has to be used).\n\nThat's the key. Basically it comes down to this. If all your\nsearches are anchored from root, then you can get away with simple\nbtree. By 'anchored from root', mean you never query like this:\npath = [*,*,3,4,5]; -- where * are wild cards.\n\nThe above is only indexable by GIST/GIN. If you need to do that and\nyour dataset is large i'd be looking at ltree.\n\nThe reason why this works is that mat path exploits a property of a\ntree such that root based searches boil down to a a range of node with\nunambiguous endpoints on an ordered table. Celko's 'nested sets'\napproach (which although it gets points for very basic SQL\nrequirements I don't recommend it due to very poor scalability) also\nexploits this.\n\nAnother way to do matpath is via strings; then you can do prefix\noperations with the SQL LIKE operator (path LIKE '1.2.3.%'). Yet\nanother way (which I've never tried but plan to eventually is via\nnumerics: see http://arxiv.org/html/cs/0401014).\n\n> Still I do not understand the behavior of gin index, and I read your\n> recommendation as don't use it.\n\nI'm not saying that at all. My point is though is that if your\nrequirements are satisfied by btree then use that. GIST/GIN support a\nmuch broader array of operations but that support does come at a\nprice.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Nov 2013 08:36:09 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trees: integer[] outperformed by ltree" } ]
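A minimal sketch of the root-anchored integer[] scheme discussed in this thread, assuming a hypothetical table in which path stores the ids of a node's ancestors followed by the node itself (all names here are illustrative, not taken from the poster's schema):

    CREATE TABLE tree_nodes (
        node_id integer PRIMARY KEY,
        path    integer[] NOT NULL      -- e.g. '{1,5,3}': node 3 under 5 under root 1
    );
    CREATE INDEX tree_nodes_path_idx ON tree_nodes (path);   -- plain btree

    -- descendants of the node at path {1,5,3}, served by a btree range scan;
    -- the upper bound simply increments the last path element
    SELECT *
    FROM tree_nodes
    WHERE path >= ARRAY[1,5,3]
      AND path <  ARRAY[1,5,4];

    -- ancestors come straight out of the stored path, no tree walk needed
    -- (this also returns the node itself, since it appears in its own path)
    SELECT a.*
    FROM tree_nodes n
    JOIN tree_nodes a ON a.node_id = ANY (n.path)
    WHERE n.node_id = 3;

As noted above, this only covers searches anchored at the root; wildcard lookups such as path = [*,*,3] still call for GiST/GIN or ltree.
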
[ { "msg_contents": "Hi,\n\nI would like to know, how does the size of the IN list affect query planner.\nI have a query\n\nselect distinct on (event_id, tag_id) et.id,\n e.id as event_id, t.id as tag_id, t.name,\n t.user_id, t.shared, t.color,\n case\n when ea.id <> e.id then true\n else false\n end as inherited\nfrom do_event e\n join do_event ea on (ea.tree_id = e.tree_id and ea.lft <= e.lft \nand ea.rght >= e.rght)\n join do_event_tags et on (et.event_id = ea.id)\n join do_tag t on (t.id = et.tag_id)\nwhere e.id in (LIST_OF_INTEGERS) and\n (t.user_id = 14 or t.shared)\norder by event_id, tag_id, inherited;\n\nand have doubts, if the size of the list does not impact the plan \nsignificantly.\n\nIf LIST_OF_INTEGERS has <=233 values, the query is really fast:\n Unique (cost=2351.85..2353.71 rows=249 width=33) (actual \ntime=24.515..24.654 rows=163 loops=1)\n -> Sort (cost=2351.85..2352.47 rows=249 width=33) (actual \ntime=24.513..24.549 rows=166 loops=1)\n Sort Key: e.id, t.id, (CASE WHEN (ea.id <> e.id) THEN true \nELSE false END)\n Sort Method: quicksort Memory: 37kB\n -> Hash Join (cost=2217.89..2341.94 rows=249 width=33) \n(actual time=18.987..24.329 rows=166 loops=1)\n Hash Cond: (et.event_id = ea.id)\n -> Hash Join (cost=4.73..119.62 rows=1612 width=29) \n(actual time=0.151..4.634 rows=2312 loops=1)\n Hash Cond: (et.tag_id = t.id)\n -> Seq Scan on do_event_tags et (cost=0.00..79.47 \nrows=5147 width=12) (actual time=0.006..1.531 rows=5147 loops=1)\n -> Hash (cost=4.08..4.08 rows=52 width=21) \n(actual time=0.119..0.119 rows=49 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 3kB\n -> Seq Scan on do_tag t (cost=0.00..4.08 \nrows=52 width=21) (actual time=0.019..0.097 rows=49 loops=1)\n Filter: ((user_id = 14) OR shared)\n -> Hash (cost=2157.26..2157.26 rows=4472 width=8) \n(actual time=18.782..18.782 rows=270 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 11kB\n -> Nested Loop (cost=428.35..2157.26 rows=4472 \nwidth=8) (actual time=0.597..18.595 rows=270 loops=1)\n Join Filter: ((ea.lft <= e.lft) AND (ea.rght \n >= e.rght))\n -> Bitmap Heap Scan on do_event e \n(cost=428.35..926.22 rows=232 width=16) (actual time=0.568..0.895 \nrows=233 loops=1)\n Recheck Cond: (id = ANY 
\n('{110364,110377,42337,1503,5490,106267,106607,108419,108836,108556,108744,108466,108467,106331,3717,105404,35179,3398,5675,5896,5888,5287,4679,4275,4042,1599,4041,3311,1588,1605,1607,1606,1604,1594,1850,110494,110041,107955,110373,110068,110114,109503,109925,108959,108964,109189,109598,109142,109304,109607,107902,106668,109121,109101,109056,4621,109031,2574,5092,1674,106452,108901,108849,108713,108783,108766,108386,108455,2560,108397,1538,2007,108000,108389,108336,108456,36796,28985,108003,108421,108399,4871,106884,6371,36026,108204,108022,107941,107967,107911,107928,47944,107010,106640,107037,106994,107011,55313,105862,106332,106498,5850,13369,106161,5859,28465,106385,106444,102751,106371,105131,2610,102753,4833,4936,4755,4699,105402,14087,4798,4942,36249,55513,75790,75789,4238,6370,5744,5745,5149,4731,42297,34841,31190,17339,31155,31242,17701,17642,31203,31218,31376,5856,5141,18154,27146,17590,17566,13692,4867,1842,6365,6354,5480,5481,4382,5893,6355,5907,5886,5826,5028,4665,5230,5482,5273,4181,5091,4869,4983,4968,4961,4905,4906,4036,1483,4284,4790,4348,4648,4655,4647,4656,3075,4596,2144,4274,4592,4506,4549,4595,4188,4548,4511,4333,4306,4291,4240,4268,4114,3665,3547,1563,2102,1514,3579,3607,3501,2834,2436,3069,1400,2359,3056,3173,2897,2837,2780,2137,1447,1280,421,412,2076,1200,1691,446,1444,399,374,444,419,449}'::integer[]))\n -> Bitmap Index Scan on \ndo_event_pkey (cost=0.00..428.29 rows=232 width=0) (actual \ntime=0.538..0.538 rows=233 loops=1)\n Index Cond: (id = ANY \n('{110364,110377,42337,1503,5490,106267,106607,108419,108836,108556,108744,108466,108467,106331,3717,105404,35179,3398,5675,5896,5888,5287,4679,4275,4042,1599,4041,3311,1588,1605,1607,1606,1604,1594,1850,110494,110041,107955,110373,110068,110114,109503,109925,108959,108964,109189,109598,109142,109304,109607,107902,106668,109121,109101,109056,4621,109031,2574,5092,1674,106452,108901,108849,108713,108783,108766,108386,108455,2560,108397,1538,2007,108000,108389,108336,108456,36796,28985,108003,108421,108399,4871,106884,6371,36026,108204,108022,107941,107967,107911,107928,47944,107010,106640,107037,106994,107011,55313,105862,106332,106498,5850,13369,106161,5859,28465,106385,106444,102751,106371,105131,2610,102753,4833,4936,4755,4699,105402,14087,4798,4942,36249,55513,75790,75789,4238,6370,5744,5745,5149,4731,42297,34841,31190,17339,31155,31242,17701,17642,31203,31218,31376,5856,5141,18154,27146,17590,17566,13692,4867,1842,6365,6354,5480,5481,4382,5893,6355,5907,5886,5826,5028,4665,5230,5482,5273,4181,5091,4869,4983,4968,4961,4905,4906,4036,1483,4284,4790,4348,4648,4655,4647,4656,3075,4596,2144,4274,4592,4506,4549,4595,4188,4548,4511,4333,4306,4291,4240,4268,4114,3665,3547,1563,2102,1514,3579,3607,3501,2834,2436,3069,1400,2359,3056,3173,2897,2837,2780,2137,1447,1280,421,412,2076,1200,1691,446,1444,399,374,444,419,449}'::integer[]))\n -> Index Scan using do_event_tree_id on \ndo_event ea (cost=0.00..5.29 rows=1 width=16) (actual time=0.005..0.040 \nrows=59 loops=233)\n Index Cond: (tree_id = e.tree_id)\n Total runtime: 24.853 ms\n(24 rows)\n\nIf if has >=234 values, it is very slow:\n Unique (cost=2379.26..2381.14 rows=250 width=33) (actual \ntime=3851.030..3851.175 rows=163 loops=1)\n -> Sort (cost=2379.26..2379.89 rows=250 width=33) (actual \ntime=3851.027..3851.073 rows=166 loops=1)\n Sort Key: e.id, t.id, (CASE WHEN (ea.id <> e.id) THEN true \nELSE false END)\n Sort Method: quicksort Memory: 37kB\n -> Nested Loop (cost=139.77..2369.30 rows=250 width=33) \n(actual time=44.373..3850.597 rows=166 loops=1)\n Join Filter: 
((ea.lft <= e.lft) AND (ea.rght >= e.rght))\n -> Hash Join (cost=139.77..1286.83 rows=1612 width=41) \n(actual time=10.784..41.107 rows=2312 loops=1)\n Hash Cond: (ea.id = et.event_id)\n -> Seq Scan on do_event ea (cost=0.00..840.97 \nrows=28997 width=16) (actual time=0.013..10.275 rows=28997 loops=1)\n -> Hash (cost=119.62..119.62 rows=1612 width=29) \n(actual time=10.607..10.607 rows=2312 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 137kB\n -> Hash Join (cost=4.73..119.62 rows=1612 \nwidth=29) (actual time=0.468..8.024 rows=2312 loops=1)\n Hash Cond: (et.tag_id = t.id)\n -> Seq Scan on do_event_tags et \n(cost=0.00..79.47 rows=5147 width=12) (actual time=0.007..2.578 \nrows=5147 loops=1)\n -> Hash (cost=4.08..4.08 rows=52 \nwidth=21) (actual time=0.434..0.434 rows=49 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 3kB\n -> Seq Scan on do_tag t \n(cost=0.00..4.08 rows=52 width=21) (actual time=0.030..0.405 rows=49 \nloops=1)\n Filter: ((user_id = 14) OR \nshared)\n -> Index Scan using do_event_tree_id on do_event e \n(cost=0.00..0.65 rows=1 width=16) (actual time=0.803..1.644 rows=1 \nloops=2312)\n Index Cond: (tree_id = ea.tree_id)\n Filter: (id = ANY \n('{110364,110377,42337,1503,5490,106267,106607,108419,108836,108556,108744,108466,108467,106331,3717,105404,35179,3398,5675,5896,5888,5287,4679,4275,4042,1599,4041,3311,1588,1605,1607,1606,1604,1594,1850,110494,110041,107955,110373,110068,110114,109503,109925,108959,108964,109189,109598,109142,109304,109607,107902,106668,109121,109101,109056,4621,109031,2574,5092,1674,106452,108901,108849,108713,108783,108766,108386,108455,2560,108397,1538,2007,108000,108389,108336,108456,36796,28985,108003,108421,108399,4871,106884,6371,36026,108204,108022,107941,107967,107911,107928,47944,107010,106640,107037,106994,107011,55313,105862,106332,106498,5850,13369,106161,5859,28465,106385,106444,102751,106371,105131,2610,102753,4833,4936,4755,4699,105402,14087,4798,4942,36249,55513,75790,75789,4238,6370,5744,5745,5149,4731,42297,34841,31190,17339,31155,31242,17701,17642,31203,31218,31376,5856,5141,18154,27146,17590,17566,13692,4867,1842,6365,6354,5480,5481,4382,5893,6355,5907,5886,5826,5028,4665,5230,5482,5273,4181,5091,4869,4983,4968,4961,4905,4906,4036,1483,4284,4790,4348,4648,4655,4647,4656,3075,4596,2144,4274,4592,4506,4549,4595,4188,4548,4511,4333,4306,4291,4240,4268,4114,3665,3547,1563,2102,1514,3579,3607,3501,2834,2436,3069,1400,2359,3056,3173,2897,2837,2780,2137,1447,1280,421,412,2076,1200,1691,446,1444,399,374,444,419,449,1021}'::integer[]))\n Total runtime: 3851.458 ms\n(22 rows)\n\nIncreasing effective_cache_size or work_mem does not seem to have impact.\n\n\nThanks for any hint,\n\nJohn\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 08 Nov 2013 15:04:55 +0100", "msg_from": "Jan Walter <[email protected]>", "msg_from_op": true, "msg_subject": "Size of IN list affects query plan" }, { "msg_contents": "Jan Walter <[email protected]> writes:\n> I would like to know, how does the size of the IN list affect query planner.\n\nAFAICT, the reason the second plan is slow is the large number of checks\nof the IN list. The planner does account for the cost of that, but it's\ndrastically underestimating that cost relative to the cost of I/O for the\nheap and index accesses. 
I suppose that your test case is fully cached in\nmemory, which helps make the CPU costs more important than I/O costs.\nIf you think this is representative of your real workload, then you\nneed to decrease random_page_cost (and maybe seq_page_cost too) to make\nthe cost estimates correspond better to that reality.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 08 Nov 2013 09:31:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Size of IN list affects query plan" }, { "msg_contents": "On Fri, Nov 8, 2013 at 6:04 AM, Jan Walter <[email protected]> wrote:\n\n> Hi,\n>\n> I would like to know, how does the size of the IN list affect query\n> planner.\n> I have a query\n>\n> select distinct on (event_id, tag_id) et.id,\n> e.id as event_id, t.id as tag_id, t.name,\n> t.user_id, t.shared, t.color,\n> case\n> when ea.id <> e.id then true\n> else false\n> end as inherited\n> from do_event e\n> join do_event ea on (ea.tree_id = e.tree_id and ea.lft <= e.lft and\n> ea.rght >= e.rght)\n> join do_event_tags et on (et.event_id = ea.id)\n> join do_tag t on (t.id = et.tag_id)\n> where e.id in (LIST_OF_INTEGERS) and\n> (t.user_id = 14 or t.shared)\n> order by event_id, tag_id, inherited;\n>\n\n\nLooking at your EXPLAIN ANALYZE plan I was immediately reminded of this\narticle\nhttp://www.datadoghq.com/2013/08/100x-faster-postgres-performance-by-changing-1-line/,\nwhere changing the array to a VALUES() clause was a huge win for them.\n\nOn Fri, Nov 8, 2013 at 6:04 AM, Jan Walter <[email protected]> wrote:\nHi,\n\nI would like to know, how does the size of the IN list affect query planner.\nI have a query\n\nselect distinct on (event_id, tag_id) et.id,\n       e.id as event_id, t.id as tag_id, t.name,\n       t.user_id, t.shared, t.color,\n       case\n         when ea.id <> e.id then true\n         else false\n       end as inherited\nfrom do_event e\n     join do_event ea on (ea.tree_id = e.tree_id and ea.lft <= e.lft and ea.rght >= e.rght)\n     join do_event_tags et on (et.event_id = ea.id)\n     join do_tag t on (t.id = et.tag_id)\nwhere e.id in (LIST_OF_INTEGERS) and\n      (t.user_id = 14 or t.shared)\norder by event_id, tag_id, inherited;Looking at your EXPLAIN ANALYZE plan I was immediately reminded of this article http://www.datadoghq.com/2013/08/100x-faster-postgres-performance-by-changing-1-line/, where changing the array to a VALUES() clause was a huge win for them.", "msg_date": "Fri, 8 Nov 2013 06:31:49 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Size of IN list affects query plan" }, { "msg_contents": "Thanks for your comments.\n\nOn 8.11.2013 15:31, Tom Lane wrote:\n> AFAICT, the reason the second plan is slow is the large number of \n> checks of the IN list. The planner does account for the cost of that, \n> but it's drastically underestimating that cost relative to the cost of \n> I/O for the heap and index accesses. I suppose that your test case is \n> fully cached in memory, which helps make the CPU costs more important \n> than I/O costs. 
If you think this is representative of your real \n> workload, then you need to decrease random_page_cost (and maybe \n> seq_page_cost too) to make the cost estimates correspond better to \n> that reality.\n\nI am not sure I understand it well - in the first case (fast query), \ncache is utilized in a better way? Going down with random_page_cost \ngives me fast query plans with big lists as you expected.\nI tested the slow query on different machines with (default) settings of \nseq_page_cost, and I am getting those fast query plans, too, so I am \ncurious what else could affect that (same db vacuum analyzed).\n\nAnyway it opens a question if big (tens to hundreds) IN lists is a bad \npractice, or just something that has to be used carefully. I have to \nadmit I am surprised that this rather standard technique leads to so \nwide range of performance.\n\nOn 8.11.2013 15:31, bricklen wrote:\n> Looking at your EXPLAIN ANALYZE plan I was immediately reminded of \n> this article \n> http://www.datadoghq.com/2013/08/100x-faster-postgres-performance-by-changing-1-line/, \n> where changing the array to a VALUES() clause was a huge win for them.\n\nYeah, I saw it before. Unfortunately that does not help significantly in \nmy case.\n\nJan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 14:56:23 +0100", "msg_from": "Jan Walter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Size of IN list affects query plan" } ]
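For reference, a sketch of the two suggestions made in this thread, using the table name from the query but an abbreviated id list; the cost value shown is only a starting point for a mostly-cached database and should be verified with EXPLAIN (ANALYZE, BUFFERS) before being made permanent:

    -- (a) rewrite the long IN (...) literal as a VALUES join
    SELECT e.*
    FROM do_event e
    JOIN (VALUES (110364), (110377), (42337)) AS wanted(id)   -- ... rest of the ids
      ON e.id = wanted.id;

    -- (b) or let the cost model reflect cheap (cached) random I/O, per session first
    SET random_page_cost = 1.5;   -- possibly lower seq_page_cost below 1.0 as well
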
[ { "msg_contents": "I would like to know, What is BitMap Heap Scan & BitMap Index Scan?When I use\nEXPLAIN for query, which has LEFT JOIN with 4 different table thensome time\nquery planner uses Bitmap Heap Scan and some time Bitmap Index Scan?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/BitMap-Heap-Scan-BitMap-Index-Scan-tp5777632.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nI would like to know, What is BitMap Heap Scan & BitMap Index Scan?\nWhen I use EXPLAIN for query, which has LEFT JOIN with 4 different table then\nsome time query planner uses Bitmap Heap Scan and some time Bitmap Index Scan?\n\n\t\n\t\n\t\n\nView this message in context: BitMap Heap Scan & BitMap Index Scan\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.", "msg_date": "Sat, 9 Nov 2013 23:32:14 -0800 (PST)", "msg_from": "monalee_dba <[email protected]>", "msg_from_op": true, "msg_subject": "BitMap Heap Scan & BitMap Index Scan" }, { "msg_contents": "http://www.postgresql.org/docs/9.3/static/using-explain.html\n\nOn Sun, Nov 10, 2013 at 4:32 PM, monalee_dba\n<[email protected]> wrote:\n> I would like to know, What is BitMap Heap Scan & BitMap Index Scan? When I\n> use EXPLAIN for query, which has LEFT JOIN with 4 different table then some\n> time query planner uses Bitmap Heap Scan and some time Bitmap Index Scan?\nThe way to go here would be to have a look at the documentation first:\nhttp://www.postgresql.org/docs/9.3/static/using-explain.html\n\nThen, AFAIK, Bitmap Heap Scan (upper level) is always coupled with\nBitmap Index Scan (lower level) so there are always in at least 2\nnodes, at least because you could have multiple Bitmap Index Scan\nnodes. The lower node Bitmap Index Scan creates a bitmap of the pages\nof the relation to track pages that might contain tuples satisfying\nthe index condition (1 bit per page, so a relation with 1 million\npages would have roughly 119kB). Then the bitmap is passed to the\nupper node called \"Bitmap Index Scan\", that reads the pages in a more\nsequential fashion.\n\nRegards,\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 13 Nov 2013 10:25:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BitMap Heap Scan & BitMap Index Scan" }, { "msg_contents": "On 10/11/13 08:32, monalee_dba wrote:\n> I would like to know, What is BitMap Heap Scan & BitMap Index Scan? When\n> I use EXPLAIN for query, which has LEFT JOIN with 4 different table then\n> some time query planner uses Bitmap Heap Scan and some time Bitmap Index\n> Scan?\n\nCheck out this great presentation:\n\n http://momjian.us/main/writings/pgsql/optimizer.pdf\n\nThe way I understand it is this (Correct me if I am wrong). The bitmap\nindex scan uses an index to build a bitmap where each bit corresponds to\na data buffer (8k). Since a buffer can contain multiple tuples and not\nall of them must match the condition another run over the heap pages is\nneeded to find the matching tuples. This is the bitmap heap scan. 
It\niterates over the table data buffers found in the bitmap index scan and\nselects only those tuples that match the filter (hence the recheck thing\nyou see in explain) and visibility conditions.\n\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 13 Nov 2013 10:37:46 +0100", "msg_from": "=?ISO-8859-1?Q?Torsten_F=F6rtsch?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BitMap Heap Scan & BitMap Index Scan" } ]
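A self-contained toy example (all names hypothetical) that normally produces the two-level pair of nodes described above, including a BitmapAnd when two indexes are combined; the exact plan depends on table size and statistics, so the shape sketched in the comment is indicative only and costs are omitted:

    CREATE TABLE bitmap_demo (a int, b int, payload text);
    CREATE INDEX bitmap_demo_a_idx ON bitmap_demo (a);
    CREATE INDEX bitmap_demo_b_idx ON bitmap_demo (b);
    INSERT INTO bitmap_demo
    SELECT i % 1000, i % 777, 'x' FROM generate_series(1, 1000000) AS i;
    ANALYZE bitmap_demo;

    EXPLAIN SELECT * FROM bitmap_demo WHERE a = 5 AND b = 5;
    -- typically:
    --   Bitmap Heap Scan on bitmap_demo
    --     Recheck Cond: ((a = 5) AND (b = 5))
    --     ->  BitmapAnd
    --           ->  Bitmap Index Scan on bitmap_demo_a_idx
    --                 Index Cond: (a = 5)
    --           ->  Bitmap Index Scan on bitmap_demo_b_idx
    --                 Index Cond: (b = 5)
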
[ { "msg_contents": "Eg. SELECT col1, col2, col3,....col10 FROM table1;\n\nFor above query If I didn't mention ORDER BY clause, then I want to know\nselected data will appear in which order by a query planner?\n\nBecause I have huge size table, and when I applied ORDER BY col1, col2..in\nquery the\nperformance is soo bad that I can't offred. \nWhat should I do ? Because my requirement is data with ordered column.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Order-By-Clause-Slows-Query-Performance-tp5777633.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 9 Nov 2013 23:40:24 -0800 (PST)", "msg_from": "monalee_dba <[email protected]>", "msg_from_op": true, "msg_subject": "Order By Clause, Slows Query Performance?" }, { "msg_contents": "On Sun, Nov 10, 2013 at 4:40 PM, monalee_dba\n<[email protected]> wrote:\n> Eg. SELECT col1, col2, col3,....col10 FROM table1;\n>\n> For above query If I didn't mention ORDER BY clause, then I want to know\n> selected data will appear in which order by a query planner?\nThe data will be selected in the order at which it is scanned.\n\n> Because I have huge size table, and when I applied ORDER BY col1, col2..in\n> query the\n> performance is soo bad that I can't offred.\n> What should I do ? Because my requirement is data with ordered column.\nRedesign your application. The larger your relation \"table1\" would\nget, the more data you would fetch and the slower it would get. The\nbest advice I got here would be first to analyze why you need to fetch\nthat much data back to your application. Particularly, if your\napplication fetches that much data and post-treats it internally, you\nshould try to maximize such processing on the database server side and\nnot the application side to minimize the amount of data exchanged\nbetween the database server and the application client.\n\nNote that you might as well consider changing your schema.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 13 Nov 2013 10:31:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order By Clause, Slows Query Performance?" }, { "msg_contents": "monalee_dba wrote:\r\n> Eg. SELECT col1, col2, col3,....col10 FROM table1;\r\n> \r\n> For above query If I didn't mention ORDER BY clause, then I want to know\r\n> selected data will appear in which order by a query planner?\r\n> \r\n> Because I have huge size table, and when I applied ORDER BY col1, col2..in\r\n> query the\r\n> performance is soo bad that I can't offred.\r\n> What should I do ? Because my requirement is data with ordered column.\r\n\r\nA B-Tree index may help with that:\r\nhttp://www.postgresql.org/docs/current/static/indexes-ordering.html\r\nConsider a multicolumn index.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 13 Nov 2013 09:07:11 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order By Clause, Slows Query Performance?" } ]
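Keeping the placeholder names from the question, a small sketch of the multicolumn-index idea: when a btree index matches the ORDER BY, the planner can return rows already in order instead of sorting the whole table, which pays off most when only a slice of the result is needed:

    CREATE INDEX table1_col1_col2_idx ON table1 (col1, col2);

    SELECT col1, col2, col3
    FROM table1
    ORDER BY col1, col2
    LIMIT 100;

    -- EXPLAIN should then show an Index Scan with no separate Sort node.
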
[ { "msg_contents": "Postgres 9.1.9.\n\nexplain analyze select min(insert_time) from cnu_stats.page_hits_raw ;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.12..0.13 rows=1 width=0) (actual time=257545.835..257545.836 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..0.12 rows=1 width=8) (actual time=257545.828..257545.829 rows=1 loops=1)\n -> Index Scan using page_hits_raw_pkey on page_hits_raw (cost=0.00..5445004.65 rows=47165480 width=8) (actual time=257545.826..257545.826 rows=1 loops=1)\n Index Cond: (insert_time IS NOT NULL)\n Total runtime: 257545.881 ms\n(6 rows)\n\n\nI checked and there were no un-granted locks... but I have a hard time believing it actually too 257 seconds to get 2 pages (one index, one heap) back from our SAN.\n\nAm I missing something here?\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 15:48:09 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Horrific time for getting 1 record from an index?" }, { "msg_contents": "On Mon, Nov 11, 2013 at 1:48 PM, Jim Nasby <[email protected]> wrote:\n> Postgres 9.1.9.\n>\n> explain analyze select min(insert_time) from cnu_stats.page_hits_raw ;\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=0.12..0.13 rows=1 width=0) (actual\n> time=257545.835..257545.836 rows=1 loops=1)\n> InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..0.12 rows=1 width=8) (actual\n> time=257545.828..257545.829 rows=1 loops=1)\n> -> Index Scan using page_hits_raw_pkey on page_hits_raw\n> (cost=0.00..5445004.65 rows=47165480 width=8) (actual\n> time=257545.826..257545.826 rows=1 loops=1)\n> Index Cond: (insert_time IS NOT NULL)\n> Total runtime: 257545.881 ms\n> (6 rows)\n>\n>\n> I checked and there were no un-granted locks... but I have a hard time\n> believing it actually too 257 seconds to get 2 pages (one index, one heap)\n> back from our SAN.\n\nTry adding EXPLAIN (ANALYZE, BUFFERS). I am wondering if you are\nreading through a lot of pages addressing tuples not visible to the\ntransaction.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 13:51:40 -0800", "msg_from": "Daniel Farina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horrific time for getting 1 record from an index?" }, { "msg_contents": "On 11/11/13 3:51 PM, Daniel Farina wrote:\n> On Mon, Nov 11, 2013 at 1:48 PM, Jim Nasby <[email protected]> wrote:\n>> Postgres 9.1.9.\n>>\n>> explain analyze select min(insert_time) from cnu_stats.page_hits_raw ;\n>> I checked and there were no un-granted locks... but I have a hard time\n>> believing it actually too 257 seconds to get 2 pages (one index, one heap)\n>> back from our SAN.\n>\n> Try adding EXPLAIN (ANALYZE, BUFFERS). 
I am wondering if you are\n> reading through a lot of pages addressing tuples not visible to the\n> transaction.\n\nexplain (analyze,buffers) select min(insert_time) from cnu_stats.page_hits_raw ;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.12..0.13 rows=1 width=0) (actual time=119.347..119.347 rows=1 loops=1)\n Buffers: shared hit=1 read=9476\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..0.12 rows=1 width=8) (actual time=119.335..119.336 rows=1 loops=1)\n Buffers: shared hit=1 read=9476\n -> Index Scan using page_hits_raw_pkey on page_hits_raw (cost=0.00..5445004.65 rows=47165480 width=8) (actual time=119.333..119.333 rows=1 loops=1)\n Index Cond: (insert_time IS NOT NULL)\n Buffers: shared hit=1 read=9476\n Total runtime: 119.382 ms\n(9 rows)\n\nWe do run a regular process to remove older rows... I thought we were vacuuming after that process but maybe not.\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 15:57:47 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Horrific time for getting 1 record from an index?" }, { "msg_contents": "On Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <[email protected]> wrote:\n> We do run a regular process to remove older rows... I thought we were\n> vacuuming after that process but maybe not.\n\nCould be a long query, idle-in-transaction, prepared transaction, or\nhot standby feedback gone bad, too.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 13:59:49 -0800", "msg_from": "Daniel Farina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horrific time for getting 1 record from an index?" }, { "msg_contents": "On Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <[email protected]> wrote:\n\n>\n> explain (analyze,buffers) select min(insert_time) from\n> cnu_stats.page_hits_raw ;\n>\n> QUERY PLAN\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> -----------------------------------------\n> Result (cost=0.12..0.13 rows=1 width=0) (actual time=119.347..119.347\n> rows=1 loops=1)\n> Buffers: shared hit=1 read=9476\n> InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..0.12 rows=1 width=8) (actual\n> time=119.335..119.336 rows=1 loops=1)\n> Buffers: shared hit=1 read=9476\n> -> Index Scan using page_hits_raw_pkey on page_hits_raw\n> (cost=0.00..5445004.65 rows=47165480 width=8) (actual\n> time=119.333..119.333 rows=1 loops=1)\n>\n> Index Cond: (insert_time IS NOT NULL)\n> Buffers: shared hit=1 read=9476\n> Total runtime: 119.382 ms\n> (9 rows)\n>\n> We do run a regular process to remove older rows... I thought we were\n> vacuuming after that process but maybe not.\n\n\n\nBtree indexes have special code that kill index-tuples when the table-tuple\nis dead-to-all, so only the first such query after the mass deletion\nbecomes vacuum-eligible should be slow, even if a vacuum is not done. 
But\nif there are long running transactions that prevent the dead rows from\ngoing out of scope, nothing can be done until those transactions go away.\n\nCheers,\n\nJeff\n\nOn Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <[email protected]> wrote:\n\nexplain (analyze,buffers) select min(insert_time) from cnu_stats.page_hits_raw ;\n                                                                           QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result  (cost=0.12..0.13 rows=1 width=0) (actual time=119.347..119.347 rows=1 loops=1)\n   Buffers: shared hit=1 read=9476\n   InitPlan 1 (returns $0)\n     ->  Limit  (cost=0.00..0.12 rows=1 width=8) (actual time=119.335..119.336 rows=1 loops=1)\n           Buffers: shared hit=1 read=9476\n           ->  Index Scan using page_hits_raw_pkey on page_hits_raw  (cost=0.00..5445004.65 rows=47165480 width=8) (actual time=119.333..119.333 rows=1 loops=1)\n                 Index Cond: (insert_time IS NOT NULL)\n                 Buffers: shared hit=1 read=9476\n Total runtime: 119.382 ms\n(9 rows)\n\nWe do run a regular process to remove older rows... I thought we were vacuuming after that process but maybe not.Btree indexes have special code that kill index-tuples when the table-tuple is dead-to-all, so only the first such query after the mass deletion becomes vacuum-eligible should be slow, even if a vacuum is not done.  But if there are long running transactions that prevent the dead rows from going out of scope, nothing can be done until those transactions go away.\nCheers,Jeff", "msg_date": "Mon, 11 Nov 2013 14:57:16 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horrific time for getting 1 record from an index?" }, { "msg_contents": "On 11/11/13 4:57 PM, Jeff Janes wrote:\n> On Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <[email protected] <mailto:[email protected]>> wrote:\n> Btree indexes have special code that kill index-tuples when the table-tuple is dead-to-all, so only the first such query after the mass deletion becomes vacuum-eligible should be slow, even if a vacuum is not done. But if there are long running transactions that prevent the dead rows from going out of scope, nothing can be done until those transactions go away.\n\nThere is? I didn't know that, can you point me at code?\n\nBTW, I originally had this, even after multiple queries:\n\n Buffers: shared hit=1 read=9476\n\nThen vacuum:\nINFO: index \"page_hits_raw_pkey\" now contains 50343572 row versions in 182800 pages\nDETAIL: 3466871 index row versions were removed.\n44728 index pages have been deleted, 35256 are currently reusable.\n\nThen...\n\n Buffers: shared hit=1 read=4\n\nSo I suspect a vacuum is actually needed...\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 17:28:37 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Horrific time for getting 1 record from an index?" 
}, { "msg_contents": "On Mon, Nov 11, 2013 at 3:28 PM, Jim Nasby <[email protected]> wrote:\n\n> On 11/11/13 4:57 PM, Jeff Janes wrote:\n>\n> On Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <[email protected] <mailto:\n>> [email protected]>> wrote:\n>> Btree indexes have special code that kill index-tuples when the\n>> table-tuple is dead-to-all, so only the first such query after the mass\n>> deletion becomes vacuum-eligible should be slow, even if a vacuum is not\n>> done. But if there are long running transactions that prevent the dead\n>> rows from going out of scope, nothing can be done until those transactions\n>> go away.\n>>\n>\n> There is? I didn't know that, can you point me at code?\n>\n\n\ngit grep \"kill_prior_tuple\"\n\n\n>\n> BTW, I originally had this, even after multiple queries:\n>\n> Buffers: shared hit=1 read=9476\n>\n> Then vacuum:\n> INFO: index \"page_hits_raw_pkey\" now contains 50343572 row versions in\n> 182800 pages\n> DETAIL: 3466871 index row versions were removed.\n> 44728 index pages have been deleted, 35256 are currently reusable.\n>\n> Then...\n>\n> Buffers: shared hit=1 read=4\n>\n> So I suspect a vacuum is actually needed...\n\n\nHmm. Maybe the kill method doesn't unlink the empty pages from the tree?\n\nCheers,\n\nJeff\n\nOn Mon, Nov 11, 2013 at 3:28 PM, Jim Nasby <[email protected]> wrote:\nOn 11/11/13 4:57 PM, Jeff Janes wrote:\n\nOn Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <[email protected] <mailto:[email protected]>> wrote:\nBtree indexes have special code that kill index-tuples when the table-tuple is dead-to-all, so only the first such query after the mass deletion becomes vacuum-eligible should be slow, even if a vacuum is not done.  But if there are long running transactions that prevent the dead rows from going out of scope, nothing can be done until those transactions go away.\n\n\nThere is? I didn't know that, can you point me at code?git grep \"kill_prior_tuple\" \n\nBTW, I originally had this, even after multiple queries:\n\n           Buffers: shared hit=1 read=9476\n\nThen vacuum:\nINFO:  index \"page_hits_raw_pkey\" now contains 50343572 row versions in 182800 pages\nDETAIL:  3466871 index row versions were removed.\n44728 index pages have been deleted, 35256 are currently reusable.\n\nThen...\n\n           Buffers: shared hit=1 read=4\n\nSo I suspect a vacuum is actually needed...Hmm.  Maybe the kill method doesn't unlink the empty pages from the tree?Cheers,Jeff", "msg_date": "Mon, 11 Nov 2013 16:30:59 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horrific time for getting 1 record from an index?" }, { "msg_contents": "On Mon, Nov 11, 2013 at 4:30 PM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Nov 11, 2013 at 3:28 PM, Jim Nasby <[email protected]> wrote:\n>\n>> On 11/11/13 4:57 PM, Jeff Janes wrote:\n>>\n>> On Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <[email protected] <mailto:\n>>> [email protected]>> wrote:\n>>> Btree indexes have special code that kill index-tuples when the\n>>> table-tuple is dead-to-all, so only the first such query after the mass\n>>> deletion becomes vacuum-eligible should be slow, even if a vacuum is not\n>>> done. But if there are long running transactions that prevent the dead\n>>> rows from going out of scope, nothing can be done until those transactions\n>>> go away.\n>>>\n>>\n>> There is? 
I didn't know that, can you point me at code?\n>>\n>\n>\n> git grep \"kill_prior_tuple\"\n>\n>\n>>\n>> BTW, I originally had this, even after multiple queries:\n>>\n>> Buffers: shared hit=1 read=9476\n>>\n>\nWhat were the timings like? Upon repeated execution it seems like all the\nbuffers should be loaded and so be \"hit\", not \"read\".\n\n\n\n> Then vacuum:\n>> INFO: index \"page_hits_raw_pkey\" now contains 50343572 row versions in\n>> 182800 pages\n>> DETAIL: 3466871 index row versions were removed.\n>> 44728 index pages have been deleted, 35256 are currently reusable.\n>>\n>> Then...\n>>\n>> Buffers: shared hit=1 read=4\n>>\n>> So I suspect a vacuum is actually needed...\n>\n>\n> Hmm. Maybe the kill method doesn't unlink the empty pages from the tree?\n>\n\nI verified that this is the case--the empty pages remain linked in the tree\nuntil a vacuum removes them. But walking through empty leaf pages is way\nfaster than resolving pages full of pointers to dead-to-all tuple, so the\nkill code still gives a huge benefit. But of course nothing will do much\ngood until the transaction horizon advances.\n\nCheers,\n\nJeff\n\nOn Mon, Nov 11, 2013 at 4:30 PM, Jeff Janes <[email protected]> wrote:\nOn Mon, Nov 11, 2013 at 3:28 PM, Jim Nasby <[email protected]> wrote:\n\n\nOn 11/11/13 4:57 PM, Jeff Janes wrote:\n\nOn Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <[email protected] <mailto:[email protected]>> wrote:\nBtree indexes have special code that kill index-tuples when the table-tuple is dead-to-all, so only the first such query after the mass deletion becomes vacuum-eligible should be slow, even if a vacuum is not done.  But if there are long running transactions that prevent the dead rows from going out of scope, nothing can be done until those transactions go away.\n\n\nThere is? I didn't know that, can you point me at code?git grep \"kill_prior_tuple\" \n\nBTW, I originally had this, even after multiple queries:\n\n           Buffers: shared hit=1 read=9476What were the timings like?  Upon repeated execution it seems like all the buffers should be loaded and so be \"hit\", not \"read\".\n \n\nThen vacuum:\nINFO:  index \"page_hits_raw_pkey\" now contains 50343572 row versions in 182800 pages\nDETAIL:  3466871 index row versions were removed.\n44728 index pages have been deleted, 35256 are currently reusable.\n\nThen...\n\n           Buffers: shared hit=1 read=4\n\nSo I suspect a vacuum is actually needed...Hmm.  Maybe the kill method doesn't unlink the empty pages from the tree?I verified that this is the case--the empty pages remain linked in the tree until a vacuum removes them.  But walking through empty leaf pages is way faster than resolving pages full of pointers to dead-to-all tuple, so the kill code still gives a huge benefit.  But of course nothing will do much good until the transaction horizon advances.\n Cheers,Jeff", "msg_date": "Tue, 12 Nov 2013 16:17:37 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horrific time for getting 1 record from an index?" }, { "msg_contents": "On 11/12/13 6:17 PM, Jeff Janes wrote:\n> BTW, I originally had this, even after multiple queries:\n>\n> Buffers: shared hit=1 read=9476\n>\n>\n> What were the timings like? Upon repeated execution it seems like all the buffers should be loaded and so be \"hit\", not \"read\".\n\nWell, the problem here is that this is a heavily hit 1.5TB database with 8GB of shared buffers... 
so stuff has to work hard to stay in buffer (and I didn't run all this immediately one after the other).\n\n> Then vacuum:\n> INFO: index \"page_hits_raw_pkey\" now contains 50343572 row versions in 182800 pages\n> DETAIL: 3466871 index row versions were removed.\n> 44728 index pages have been deleted, 35256 are currently reusable.\n>\n> Then...\n>\n> Buffers: shared hit=1 read=4\n>\n> So I suspect a vacuum is actually needed...\n>\n>\n> Hmm. Maybe the kill method doesn't unlink the empty pages from the tree?\n>\n>\n> I verified that this is the case--the empty pages remain linked in the tree until a vacuum removes them. But walking through empty leaf pages is way faster than resolving pages full of pointers to dead-to-all tuple, so the kill code still gives a huge benefit. But of course nothing will do much good until the transaction horizon advances.\n\nAaaand... that gets to the other problem... our SAN performance is pretty abysmal. It took ~270 seconds to read 80MB of index pages (+ whatever heap) to get to the first live tuple. (This was run close enough to the vacuum that I don't think visibility of these tuples would have changed mid-stream).\n-- \nJim Nasby, Lead Data Architect (512) 569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 12 Nov 2013 18:22:36 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Horrific time for getting 1 record from an index?" }, { "msg_contents": "On Tue, Nov 12, 2013 at 6:22 PM, Jim Nasby <[email protected]> wrote:\n> On 11/12/13 6:17 PM, Jeff Janes wrote:\n>>\n>> I verified that this is the case--the empty pages remain linked in the\n>> tree until a vacuum removes them. But walking through empty leaf pages is\n>> way faster than resolving pages full of pointers to dead-to-all tuple, so\n>> the kill code still gives a huge benefit. But of course nothing will do\n>> much good until the transaction horizon advances.\n>\n>\n> Aaaand... that gets to the other problem... our SAN performance is pretty\n> abysmal. It took ~270 seconds to read 80MB of index pages (+ whatever heap)\n> to get to the first live tuple. (This was run close enough to the vacuum\n> that I don't think visibility of these tuples would have changed\n> mid-stream).\n\nThat's awful, but 'par for the course' for SANs in my experience. If\nmy math is right, that works out to 27ms / page read. But each index\npage read can cause multiple heap page reads depending on how the data\nis organized so I think you are up against the laws of physics. All\nwe can do is to try and organize data so that access patterns are less\nradom and/or invest in modern hardware.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 13 Nov 2013 09:03:05 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horrific time for getting 1 record from an index?" } ]
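Following up on the points above, a short checklist (written for the 9.1 server in this thread; from 9.2 on, procpid and current_query are renamed pid and query) for finding what is holding the transaction horizon back, and for cleaning up once it has advanced:

    -- oldest open transactions (long-running or idle in transaction)
    SELECT procpid, xact_start, now() - xact_start AS xact_age, current_query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start
    LIMIT 10;

    -- forgotten two-phase transactions pin the horizon as well
    SELECT * FROM pg_prepared_xacts;

    -- once the horizon has moved past the deleted rows, a vacuum removes the dead
    -- index entries and unlinks the emptied btree pages mentioned above
    VACUUM VERBOSE cnu_stats.page_hits_raw;
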
[ { "msg_contents": "Hello all!\n\nI'm new to postgresql, so please bear with me. First of all, I have\nthe following settings enabled in my postgresql.conf file:\n\nshared_buffers = 2GB\nwork_mem = 2GB\nmaintenance_work_mem = 4GB\ncheckpoint_segments = 50\nrandom_page_cost = 3.5\ncpu_tuple_cost = 0.1\neffective_cache_size = 48GB\n\nI am trying to join a small table containing 127,375 records with a\nlarger table containing 4,830,840 records. The follow query currently\ntakes about 300ms:\n\n\nselect bigtable.a, bigtable.b, bigtable.c, count(*) from smalltable,\nbigtable where bigtable.id = smalltable.user_id and smalltable.utc\nbetween 1325376000000 and 1326721600000 group by bigtable.a,\nbigtable.b, bigtable.c;\n\n\nThere's an index on the smalltable.utc field, and bigtable.id is the\nprimary key for that table.\n\nHere's the result of running explain analyze:\n\n HashAggregate (cost=227061.05..227063.45 rows=24 width=6) (actual\ntime=388.519..388.527 rows=24 loops=1)\n -> Nested Loop (cost=0.85..226511.95 rows=54911 width=6) (actual\ntime=0.054..359.969 rows=54905 loops=1)\n -> Index Scan using smalltable_utc_idx on smalltable\n(cost=0.42..7142.13 rows=54911 width=8) (actual time=0.034..28.803\nrows=54905 loops=1)\n Index Cond: ((utc >= 1325376000000::bigint) AND (utc <=\n1326721600000::bigint))\n -> Index Scan using bigtable_pkey on bigtable\n(cost=0.43..3.90 rows=1 width=14) (actual time=0.005..0.005 rows=1\nloops=54905)\n Index Cond: (id = ht.user_id)\n Total runtime: 388.613 ms\n(7 rows)\n\nTime: 389.922 ms\n\nWhen I do \\d+, I see that bigtable is 387MB and smalltable is only\n10MB. Is there a way that I can get this query to perform faster? Or\nis this the type of performance that I can expect for this type of\njoin?\n\nThank you!\n\nRyan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Nov 2013 20:22:19 -0800", "msg_from": "Ryan LeCompte <[email protected]>", "msg_from_op": true, "msg_subject": "Unexpected slow query time when joining small table with large table" } ]
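Purely as a diagnostic sketch (not a setting to leave enabled), one way to test whether the per-row index lookups in the nested-loop plan above are the costly part is to force a hash join on the same data and compare the EXPLAIN (ANALYZE, BUFFERS) output; the query is the original one restated with explicit JOIN syntax:

    BEGIN;
    SET LOCAL enable_nestloop = off;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT bigtable.a, bigtable.b, bigtable.c, count(*)
    FROM smalltable
    JOIN bigtable ON bigtable.id = smalltable.user_id
    WHERE smalltable.utc BETWEEN 1325376000000 AND 1326721600000
    GROUP BY bigtable.a, bigtable.b, bigtable.c;
    ROLLBACK;
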
[ { "msg_contents": "Hello,\n\nDoes anything speaks again adding a \"WITH FREEZE\" option to \"CREATE TABLE AS\" ,\nsimilar to the new COPY FREEZE feature ?\n\nbest regards,\n\nMarc Mamin\n\n\n\n\n\n\n\n\n\n\n\n\n\nHello,\n \nDoes anything speaks again adding a  \"WITH FREEZE\" option to \"CREATE TABLE AS\" ,\nsimilar to the new COPY FREEZE feature ?\n \nbest regards,\n \nMarc Mamin", "msg_date": "Wed, 13 Nov 2013 10:29:14 +0000", "msg_from": "Marc Mamin <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE TABLE AS WITH FREEZE ?" } ]
[ { "msg_contents": "Hi all,\n\nhope this is the right list to post to.\nWe saw some bad choices from the query planner regarding the use of a GIN index which got worse over time and performance started degrading seriously, so I did some digging and I found a solution which works, but I'd like to get some opinion on.\n\nHere is the table in question:\n\n Table \"public.games\"\n Column | Type | Modifiers | Storage | Stats target | Description \n------------------+-----------------------------+----------------------------------------------------+----------+--------------+-------------\n id | integer | not null default nextval('games_id_seq'::regclass) | plain | | \n runners | smallint | | plain | | \n player_id | integer | | plain | 1000 | \n partner1_id | integer | | plain | 1000 | \n partner2_id | integer | | plain | 1000 | \n partner3_id | integer | | plain | 1000 | \n created_at | timestamp without time zone | | plain | | \nIndexes:\n \"games_pkey\" PRIMARY KEY, btree (id)\n \"index_games_on_created_at\" btree (created_at)\n \"index_games_participants\" gin ((ARRAY[player_id, partner1_id, partner2_id, partner3_id])) WITH (fastupdate=off)\nHas OIDs: no\n\nI removed some columns from the output for clarity,. It has 300+ million rows. And is freshly analyzed.\nAs you see, I've already increased the stats targets for the columns which go into the GIN index before, but this had no visible effect on query plan choices.\nHere's a typical query:\n\nEXPLAIN (analyze, buffers) SELECT \"games\".* FROM \"games\" WHERE (ABS(runners) >= '3') AND ((ARRAY[player_id, partner1_id, partner2_id, partner3_id]) @> ARRAY[166866]) ORDER BY id DESC LIMIT 20 OFFSET 0;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.57..13639.64 rows=20 width=74) (actual time=330.271..12372.777 rows=20 loops=1)\n Buffers: shared hit=3453594 read=119394\n -> Index Scan Backward using games_pkey on games (cost=0.57..15526034.64 rows=22767 width=74) (actual time=330.269..12372.763 rows=20 loops=1)\n Filter: ((ARRAY[player_id, partner1_id, partner2_id, partner3_id] @> '{166866}'::integer[]) AND (abs(runners) >= 3::smallint))\n Rows Removed by Filter: 3687711\n Buffers: shared hit=3453594 read=119394\n Total runtime: 12372.848 ms\n(7 rows)\n\n\nThis is plan is not the best choice, though. It would be much more efficient to use the index_games_participants index. 
For some queries, there would be not enough records which fullfill the conditions so bascially every row of the table is scanned.\nAs \\d+ index_games_participants showed that the index had an \"array\" column, I found this:\n\nSELECT attname, attstattarget from pg_attribute WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'index_games_participants');\n attname | attstattarget \n---------+---------------\n array | -1\n(1 row)\n\n\nAlso, I noticed that for that \"array\" GIN index column there is content in pg_statistics, where as for the btree indices there isn't.\nBecause I didn't find any documentation or references on setting statistic targets on indices, I just gave it a shot:\n\nALTER TABLE index_games_participants ALTER COLUMN \"array\" SET STATISTICS 1000;\n\nAfter running ANALYZE on the table:\n\nEXPLAIN (analyze, buffers) SELECT \"games\".* FROM \"games\" WHERE (ABS(runners) >= '3') AND ((ARRAY[player_id, partner1_id, partner2_id, partner3_id]) @> ARRAY[166866]) ORDER BY id DESC LIMIT 20 OFFSET 0;\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=33947.27..33947.32 rows=20 width=74) (actual time=624.308..624.341 rows=20 loops=1)\n Buffers: shared hit=4 read=17421\n -> Sort (cost=33947.27..33961.61 rows=5736 width=74) (actual time=624.306..624.318 rows=20 loops=1)\n Sort Key: id\n Sort Method: top-N heapsort Memory: 27kB\n Buffers: shared hit=4 read=17421\n -> Bitmap Heap Scan on games (cost=164.49..33794.64 rows=5736 width=74) (actual time=6.704..621.592 rows=1963 loops=1)\n Recheck Cond: (ARRAY[player_id, partner1_id, partner2_id, partner3_id] @> '{166866}'::integer[])\n Filter: (abs(runners) >= 3::smallint)\n Rows Removed by Filter: 17043\n Buffers: shared hit=1 read=17421\n -> Bitmap Index Scan on index_games_participants (cost=0.00..163.05 rows=17207 width=0) (actual time=4.012..4.012 rows=19300 loops=1)\n Index Cond: (ARRAY[player_id, partner1_id, partner2_id, partner3_id] @> '{166866}'::integer[])\n Buffers: shared hit=1 read=19\n Total runtime: 624.572 ms\n(15 rows)\n\nMuch better! This reduced the bad plan choices substantially.\nAlso, as one could expect, SELECT * from pg_statistic WHERE starelid = (SELECT oid FROM pg_class WHERE relname = 'index_games_participants'); now had much more data.\n\nIs this a good idea? Am I missing something? Or should the GIN index actually use the statistic targets derived from the table columns it depends on?\n\nBest,\nDieter\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 14 Nov 2013 15:36:57 +0100", "msg_from": "Dieter Komendera <[email protected]>", "msg_from_op": true, "msg_subject": "Bad plan choices & statistic targets with a GIN index" }, { "msg_contents": "Dieter Komendera <[email protected]> writes:\n> Because I didn't find any documentation or references on setting statistic targets on indices, I just gave it a shot:\n\n> ALTER TABLE index_games_participants ALTER COLUMN \"array\" SET STATISTICS 1000;\n\nThis works, and will help if the planner can make use of statistics on the\nexpression the index is indexing. The reason it's not documented is that\nit's not considered supported (yet), primarily because pg_dump won't dump\nsuch a setting. And the reason for that is mainly that the column names\nof an index aren't guaranteed stable across PG versions. 
But as long\nas you're willing to remember to restore the setting manually, it's\na reasonable thing to do if it helps your query plans.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 14 Nov 2013 10:00:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan choices & statistic targets with a GIN index" } ]
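Since the thread notes that pg_dump (at the time) will not preserve a statistics target set on an index column, a small catalog query can list every index column that carries one, so the ALTER can be re-applied by hand after a restore. This is only a sketch; the generated column names (such as "array" above) are whatever the server chose for that index.

    SELECT format('ALTER TABLE %I ALTER COLUMN %I SET STATISTICS %s;',
                  c.relname, a.attname, a.attstattarget) AS restore_cmd
    FROM pg_class c
    JOIN pg_attribute a ON a.attrelid = c.oid
    WHERE c.relkind = 'i'            -- indexes only
      AND a.attstattarget > 0;       -- explicitly raised targets (default is -1)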
[ { "msg_contents": "Hey everyone --\n\nI am debugging an issue with our Postgres machine running on EC2. We are\nexperiencing slowness when retrieving about 14k rows from a larger table of\n140MM rows. Initially I thought it was an indexing problem (doing VACUUM\nFULL reduced the index size from 12gb to 8gb), but the slowness persisted.\n\nI created another table with only the subset of data we are interested in,\nand simply doing SELECT * on the table takes 21ms, as opposed to 2ms on my\nMBP. I examined the relationship between the length of the table scan on\nthis table with 14032 rows and the query time, and I got these results:\n\n Table \"public.patrick_component\"\n Column | Type | Modifiers\n------------------+---------+-----------\n id | integer |\n case_id | integer |\n type_id | integer |\n offset | integer |\n length | integer |\n internal_id | integer |\n parent_id | integer |\n right_sibling_id | integer |\n\n# Rows MBP EC2\n1 0.035 ms 0.076 ms\n10 0.017 ms 0.048 ms\n100 0.033 ms 0.316 ms\n1000 0.279 ms 3.166 ms\n10000 2.477 ms 31.006 ms\n100000 4.375 ms 42.634 ms # there are fewer than 100k rows in the table;\nfor some reason LIMIT is slower than without LIMIT\n\nAs such, I have decided that it's not an issue with the index. To me this\nlooks disk caching related, however, the entire table is only 832k, which\nshould be plenty small to fit entirely into memory (I also ran this\nmultiple times and in reverse, and the results are the same).\n\nThe machine has 30gb of memory for a 45g database. The machine's only\npurpose is for Postgres.\n\nHere are the relevant performance tweaks I have made:\nshared_buffers = 8448MB\nwork_mem = 100MB\nmaintenance_work_mem = 1024MB\nwal_buffers = 8MB\neffective_cache_size = 22303MB\n\nI have been struggling to make these types of query fast because they are\nvery common (basically fetching all of the metadata for a document, and we\nhave a lot of metadata and a lot of documents). Any help is appreciated!\n\nThanks,\nPatrick\n\nHey everyone -- I am debugging an issue with our Postgres machine running on EC2. We are experiencing slowness when retrieving about 14k rows from a larger table of 140MM rows. Initially I thought it was an indexing problem (doing VACUUM FULL reduced the index size from 12gb to 8gb), but the slowness persisted. \nI created another table with only the subset of data we are interested in, and simply doing SELECT * on the table takes 21ms, as opposed to 2ms on my MBP. I examined the relationship between the length of the table scan on this table with 14032 rows and the query time, and I got these results:\n    Table \"public.patrick_component\"      Column      |  Type   | Modifiers\n------------------+---------+----------- id               | integer | case_id          | integer |\n type_id          | integer | offset           | integer | length           | integer |\n internal_id      | integer | parent_id        | integer | right_sibling_id | integer |\n# Rows  MBP       EC2 1       0.035 ms  0.076 ms10      0.017 ms  0.048 ms\n100     0.033 ms  0.316 ms1000    0.279 ms  3.166 ms10000   2.477 ms  31.006 ms100000  4.375 ms  42.634 ms # there are fewer than 100k rows in the table; for some reason LIMIT is slower than without LIMIT\nAs such, I have decided that it's not an issue with the index. 
", "msg_date": "Fri, 15 Nov 2013 14:24:53 -0800", "msg_from": "Patrick Krecker <[email protected]>", "msg_from_op": true, "msg_subject": "General slowness when retrieving a relatively small number of rows" } ]
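The post assumes the 832 kB table is fully cached; one way to check how much of it actually sits in shared_buffers (the part PostgreSQL can see -- the OS page cache is separate) is the pg_buffercache extension. A sketch, assuming the extension can be installed and using the table name from the post:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT count(*)                                AS buffered_pages,
           pg_size_pretty(count(*) * 8192::bigint) AS buffered_bytes
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid::regclass)
    WHERE c.relname = 'patrick_component'
      AND b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database());

If the table is resident and the query is still roughly 10x slower than on the laptop, the gap may point at per-tuple CPU and virtualization overhead on the EC2 instance rather than at disk access.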
[ { "msg_contents": "Hi,\n\nI am need help, about subject \"Query cache in Postgres\".\nhow is it possible to put sql statements cached in postgres ?\nI did some tests and I can not get even with improved tuning\nparameters in the postgresql.conf.\n\nRegards,\n\n-- \nAssinatura de E-mail padr�o Riosoft 20 anos\n\n*Rogerio Pereira *\n [email protected]\nRiosoft - Sistemas para Gest�o Empresarial <http://www.riosoft.com.br>\n\n\nRespeite o meio-ambiente, n�o desperdice papel, imprima somente o \nnecess�rio.\nEsta mensagem, incluindo os seus anexos, cont�m informa��es \nconfidenciais destinadas a indiv�duo e prop�sito espec�ficos e � \nprotegida por lei. Caso voc� n�o seja o destinat�rio desta mensagem, \ndeve apag�-la. � proibida a utiliza��o, acesso, c�pia ou divulga��o n�o \nautorizada das informa��es presentes neste e-mail. As informa��es \ncontidas nesta mensagem e em seus anexos s�o de responsabilidade de seu \nautor, n�o representando id�ias, opini�es, pensamentos da Riosoft.", "msg_date": "Mon, 18 Nov 2013 14:38:09 -0200", "msg_from": "Rogerio Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "Query in cache" }, { "msg_contents": "Rogerio Pereira <[email protected]> wrote:\n\n> Hi,\n>\n> I am need help, about subject \"Query cache in Postgres\".\n> how is it possible to put sql statements cached in postgres ?\n> I did some tests and I can not get even with improved tuning\n> parameters in the postgresql.conf.\n\nNo, there isn't something like a query cache. But you can use\nmaterialized views, since 9.3.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 18 Nov 2013 17:43:56 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query in cache" }, { "msg_contents": "On Mon, Nov 18, 2013 at 02:38:09PM -0200, Rogerio Pereira wrote:\n> Hi,\n> \n> I am need help, about subject \"Query cache in Postgres\".\n> how is it possible to put sql statements cached in postgres ?\n> I did some tests and I can not get even with improved tuning\n> parameters in the postgresql.conf.\n> \n> Regards,\n> \n\nHi Rogerio,\n\nPostgreSQL does not have a query cache. I think you would need to\nroll your own.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 18 Nov 2013 10:57:15 -0600", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query in cache" }, { "msg_contents": "Hello,\n\npgpool supports memcache.\nRegards\n\n\n\n\nOn Monday, November 18, 2013 6:29 PM, Andreas Kretschmer <[email protected]> wrote:\n \nRogerio Pereira <[email protected]> wrote:\n\n\n> Hi,\n>\n> I am need help, about subject \"Query cache in  Postgres\".\n> how is it possible to put sql statements cached in postgres ?\n> I did some tests and I can not get even with improved tuning\n> parameters in the postgresql.conf.\n\nNo, there isn't something like a query cache. But you can use\nmaterialized views, since 9.3.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. 
That will just be a completely\nunintentional side effect.                              (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\"  (unknown)\nKaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nHello,pgpool supports memcache.Regards \n On Monday, November 18, 2013 6:29 PM, Andreas Kretschmer <[email protected]> wrote: Rogerio Pereira <[email protected]> wrote:> Hi,>> I am need help, about subject \"Query cache in  Postgres\".> how is it possible to put sql statements cached in postgres ?> I did some tests and I can not get even with improved tuning> parameters in the postgresql.conf.No, there isn't something like a query cache. But you can usematerialized views, since 9.3.Andreas-- Really, I'm not out to destroy Microsoft. That will just be a completelyunintentional side effect.                              (Linus Torvalds)\"If I was god, I would recompile penguin with --enable-fly.\"  (unknown)Kaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 18 Nov 2013 16:36:20 -0800 (PST)", "msg_from": "salah jubeh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query in cache" }, { "msg_contents": "2013/11/18 Rogerio Pereira <[email protected]>\n> I am need help, about subject \"Query cache in Postgres\".\n> how is it possible to put sql statements cached in postgres ?\n> I did some tests and I can not get even with improved tuning\n> parameters in the postgresql.conf.\n\nAre you talking about prepared statements or about query result caching?\n\nIf former then you need to look at the PREPARE for execute statement\n[1], though it is probably implemented in your data adapter, for\nexample like it is in DBD::Pg [2]. Also take a look at the pre_prepare\nmodule [3], that can conveniently be used with pgbouncer.\n\nIf later then there is an extension named pgmemcache [4] that will\nallow you to interact with memcached directly from postgres, so you\ncould implement cashing in stored functions, for example. However my\nadvice is to use application level caching with memcached in this\ncase, not the database level one.\n\n[1] http://www.postgresql.org/docs/9.3/static/sql-prepare.html\n[2] http://search.cpan.org/dist/DBD-Pg/Pg.pm#prepare\n[3] https://github.com/dimitri/preprepare\n[4] https://github.com/ohmu/pgmemcache/\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 18 Nov 2013 17:37:11 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query in cache" }, { "msg_contents": "On Mon, Nov 18, 2013 at 04:36:20PM -0800, salah jubeh wrote:\n> Hello,\n> \n> pgpool supports memcache.\n> Regards\n> \nVery cool. 
I had missed that in the release announcement.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Nov 2013 08:00:44 -0600", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query in cache" }, { "msg_contents": "Hi,\n\nIs possible send howto to active 'feature' ?\n\nRegards,\n\nRogerio\n\nEm 19/11/2013 12:00, [email protected] escreveu:\n> On Mon, Nov 18, 2013 at 04:36:20PM -0800, salah jubeh wrote:\n>> Hello,\n>>\n>> pgpool supports memcache.\n>> Regards\n>>\n> Very cool. I had missed that in the release announcement.\n>\n> Regards,\n> Ken\n>\n>\n\n\n-- \nAssinatura de E-mail padr�o Riosoft 20 anos\n\n*Rogerio Pereira *\nDba | Tel. 55 (17) 3215-9199 | [email protected]\nRiosoft - Sistemas para Gest�o Empresarial <http://www.riosoft.com.br>\n\n\nRespeite o meio-ambiente, n�o desperdice papel, imprima somente o \nnecess�rio.\nEsta mensagem, incluindo os seus anexos, cont�m informa��es \nconfidenciais destinadas a indiv�duo e prop�sito espec�ficos e � \nprotegida por lei. Caso voc� n�o seja o destinat�rio desta mensagem, \ndeve apag�-la. � proibida a utiliza��o, acesso, c�pia ou divulga��o n�o \nautorizada das informa��es presentes neste e-mail. As informa��es \ncontidas nesta mensagem e em seus anexos s�o de responsabilidade de seu \nautor, n�o representando id�ias, opini�es, pensamentos da Riosoft.", "msg_date": "Tue, 19 Nov 2013 13:50:24 -0200", "msg_from": "Rogerio Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query in cache" }, { "msg_contents": "Hi.\n\nI'm trying install pgmemcache and occurs error:\n\npgmemcache.h:29:23: error: sasl/sasl.h: No such file or directory\n\nExists solution exists to fix to problem?\n\nVersion the postgres: 9.3.1 with CentOS\nLink reference: \nhttp://raghavt.blogspot.com.br/2011/07/pgmemcache-setup-and-usage.html\n\nRegards,\n-- \nAssinatura de E-mail padr�o Riosoft 20 anos\n\n*Rogerio Pereira *\nDba | Tel. 55 (17) 3215-9199 | [email protected]\nRiosoft - Sistemas para Gest�o Empresarial <http://www.riosoft.com.br>\n\n\nRespeite o meio-ambiente, n�o desperdice papel, imprima somente o \nnecess�rio.\nEsta mensagem, incluindo os seus anexos, cont�m informa��es \nconfidenciais destinadas a indiv�duo e prop�sito espec�ficos e � \nprotegida por lei. Caso voc� n�o seja o destinat�rio desta mensagem, \ndeve apag�-la. � proibida a utiliza��o, acesso, c�pia ou divulga��o n�o \nautorizada das informa��es presentes neste e-mail. As informa��es \ncontidas nesta mensagem e em seus anexos s�o de responsabilidade de seu \nautor, n�o representando id�ias, opini�es, pensamentos da Riosoft.", "msg_date": "Wed, 20 Nov 2013 10:36:42 -0200", "msg_from": "Rogerio Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "Error install -pgmemcache" }, { "msg_contents": "Hi,\n\nWhat is the maximum amount of VM that can be created?\nVersion 9.3\n\nTks,\n-- \nAssinatura de E-mail padr�o Riosoft 20 anos\n\n*Rogerio Pereira *\nDba | Tel. 55 (17) 3215-9199 | [email protected]\nRiosoft - Sistemas para Gest�o Empresarial <http://www.riosoft.com.br>\n\n\nRespeite o meio-ambiente, n�o desperdice papel, imprima somente o \nnecess�rio.\nEsta mensagem, incluindo os seus anexos, cont�m informa��es \nconfidenciais destinadas a indiv�duo e prop�sito espec�ficos e � \nprotegida por lei. Caso voc� n�o seja o destinat�rio desta mensagem, \ndeve apag�-la. 
", "msg_date": "Fri, 22 Nov 2013 10:06:19 -0200", "msg_from": "Rogerio Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query in cache" }, { "msg_contents": "Hi,\n\nOn Wed, 2013-11-20 at 10:36 -0200, Rogerio Pereira wrote:\n\n> I'm trying install pgmemcache and occurs error:\n> \n> pgmemcache.h:29:23: error: sasl/sasl.h: No such file or directory\n> \n> Exists solution exists to fix to problem?\n\nYou will need to install cyrus-sasl-devel package. \n\nStill, I just uploaded the RPM package to yum repo:\n\nhttp://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/repoview/pgmemcache-93.html\n\nRegards,\n-- \nDevrim GÜNDÜZ\nPrincipal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Sun, 29 Dec 2013 22:08:41 +0200", "msg_from": "Devrim GÜNDÜZ <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error install -pgmemcache" } ]
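A minimal sketch of the materialized-view approach suggested earlier in this thread (available from 9.3), since it is the closest thing to a built-in result cache; the relation and column names here are illustrative, not from the thread:

    CREATE MATERIALIZED VIEW report_cache AS
    SELECT customer_id, date_trunc('day', created_at) AS day, count(*) AS orders
    FROM orders
    GROUP BY customer_id, date_trunc('day', created_at);

    -- Cheap reads against the precomputed result:
    SELECT * FROM report_cache WHERE customer_id = 42;

    -- Refresh when staleness matters (9.3 rebuilds the whole view):
    REFRESH MATERIALIZED VIEW report_cache;

For caching individual statement results outside the database, the pgpool-II in-memory query cache and pgmemcache mentioned above are the usual options.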
[ { "msg_contents": "Hi,\n\nWhich can be the error :\n\n-- could not send data to client: Broken pipe\n-- FATAL: connection to client lost\n-- \n\nRegards,\n\nAssinatura de E-mail padr�o Riosoft 20 anos\n\n*Rogerio Pereira *\nDba | Tel. 55 (17) 3215-9199 | [email protected]\nRiosoft - Sistemas para Gest�o Empresarial <http://www.riosoft.com.br>\n\n\nRespeite o meio-ambiente, n�o desperdice papel, imprima somente o \nnecess�rio.\nEsta mensagem, incluindo os seus anexos, cont�m informa��es \nconfidenciais destinadas a indiv�duo e prop�sito espec�ficos e � \nprotegida por lei. Caso voc� n�o seja o destinat�rio desta mensagem, \ndeve apag�-la. � proibida a utiliza��o, acesso, c�pia ou divulga��o n�o \nautorizada das informa��es presentes neste e-mail. As informa��es \ncontidas nesta mensagem e em seus anexos s�o de responsabilidade de seu \nautor, n�o representando id�ias, opini�es, pensamentos da Riosoft.", "msg_date": "Mon, 18 Nov 2013 16:27:57 -0200", "msg_from": "Rogerio Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "Error Broken pipe in log postgres" }, { "msg_contents": "2013/11/18 Rogerio Pereira <[email protected]>\n\n> Which can be the error :\n>\n> -- could not send data to client: Broken pipe\n> -- FATAL: connection to client lost\n>\n\nIt means the client program disconnected from the Postgres server (or was\nkilled) before the server finished a query, and the server had no place to\nsend the answer.\n\nCraig\n\n2013/11/18 Rogerio Pereira <[email protected]>\n\nWhich can be the error\n :\n\n\n\n -- could not send data to client: Broken pipe\n -- FATAL:  connection to client lostIt means the client program disconnected from the Postgres server (or was killed) before the server finished a query, and the server had no place to send the answer.\nCraig", "msg_date": "Mon, 18 Nov 2013 11:49:00 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error Broken pipe in log postgres" } ]
[ { "msg_contents": "I have found this:\n\nSELECT c.*\nFROM contacts c\nWHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? )\nOR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\n\nTo have a worse plan than:\n\nSELECT * FROM contacts where id IN (\n( SELECT c.id FROM contacts c\nJOIN phone p ON c.id = p.contact_id AND p.addr = ?\nUNION\nSELECT c.id FROM contacts c\nJOIN email e ON c.id = e.contact_id AND e.addr = ? );\n\nMaybe this is no surprise. But after discovering this my question is this,\nis there another option I dont' know about that is logically the same that\ncan perform even better than the UNION?\n\nI have found this:SELECT c.* FROM contacts cWHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? )OR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\nTo have a worse plan than:SELECT * FROM contacts where id IN (( SELECT c.id FROM contacts c JOIN phone p ON c.id = p.contact_id AND p.addr = ? \nUNIONSELECT c.id FROM contacts c JOIN email e ON c.id = e.contact_id AND e.addr = ? );Maybe this is no surprise. But after discovering this my question is this, is there another option I dont' know about that is logically the same that can perform even better than the UNION?", "msg_date": "Thu, 21 Nov 2013 12:20:29 -0800", "msg_from": "Robert DiFalco <[email protected]>", "msg_from_op": true, "msg_subject": "UNION versus SUB SELECT" }, { "msg_contents": "Hi Robert, could you try with \"exists\" ?\n\nSELECT c.*\nFROM contacts c\nWHERE exists ( SELECT 1 FROM phone p WHERE p.addr =? and p.contact_id=\nc.id )\nOR exists (SELECT 1 FROM email e WHERE e.addr = ? and e.contact_id=c.id );\n\n\n\n\n\n2013/11/21 Robert DiFalco <[email protected]>\n\n> I have found this:\n>\n> SELECT c.*\n> FROM contacts c\n> WHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? )\n> OR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\n>\n> To have a worse plan than:\n>\n> SELECT * FROM contacts where id IN (\n> ( SELECT c.id FROM contacts c\n> JOIN phone p ON c.id = p.contact_id AND p.addr = ?\n> UNION\n> SELECT c.id FROM contacts c\n> JOIN email e ON c.id = e.contact_id AND e.addr = ? );\n>\n> Maybe this is no surprise. But after discovering this my question is this,\n> is there another option I dont' know about that is logically the same that\n> can perform even better than the UNION?\n>\n\nHi Robert, could you try with \"exists\" ?SELECT c.* FROM contacts cWHERE  exists  ( SELECT  1 FROM phone p WHERE p.addr =? and  p.contact_id=c.id )\nOR exists (SELECT  1 FROM email e WHERE e.addr = ? and  e.contact_id=c.id );2013/11/21 Robert DiFalco <[email protected]>\nI have found this:SELECT c.* FROM contacts cWHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? )\nOR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\nTo have a worse plan than:SELECT * FROM contacts where id IN (( SELECT c.id FROM contacts c JOIN phone p ON c.id = p.contact_id AND p.addr = ? \nUNIONSELECT c.id FROM contacts c JOIN email e ON c.id = e.contact_id AND e.addr = ? );\nMaybe this is no surprise. 
But after discovering this my question is this, is there another option I dont' know about that is logically the same that can perform even better than the UNION?", "msg_date": "Thu, 21 Nov 2013 21:31:52 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION versus SUB SELECT" }, { "msg_contents": "UNION and subselect both performed better than EXISTS for this particular\ncase.\n\n\nOn Thu, Nov 21, 2013 at 12:31 PM, desmodemone <[email protected]> wrote:\n\n> Hi Robert, could you try with \"exists\" ?\n>\n> SELECT c.*\n> FROM contacts c\n> WHERE exists ( SELECT 1 FROM phone p WHERE p.addr =? and p.contact_id=\n> c.id )\n> OR exists (SELECT 1 FROM email e WHERE e.addr = ? and e.contact_id=c.id);\n>\n>\n>\n>\n>\n> 2013/11/21 Robert DiFalco <[email protected]>\n>\n>> I have found this:\n>>\n>> SELECT c.*\n>> FROM contacts c\n>> WHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? )\n>> OR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\n>>\n>> To have a worse plan than:\n>>\n>> SELECT * FROM contacts where id IN (\n>> ( SELECT c.id FROM contacts c\n>> JOIN phone p ON c.id = p.contact_id AND p.addr = ?\n>> UNION\n>> SELECT c.id FROM contacts c\n>> JOIN email e ON c.id = e.contact_id AND e.addr = ? );\n>>\n>> Maybe this is no surprise. But after discovering this my question is\n>> this, is there another option I dont' know about that is logically the same\n>> that can perform even better than the UNION?\n>>\n>\n>\n\nUNION and subselect both performed better than EXISTS for this particular case.On Thu, Nov 21, 2013 at 12:31 PM, desmodemone <[email protected]> wrote:\nHi Robert, could you try with \"exists\" ?SELECT c.* FROM contacts c\nWHERE  exists  ( SELECT  1 FROM phone p WHERE p.addr =? and  p.contact_id=c.id )\nOR exists (SELECT  1 FROM email e WHERE e.addr = ? and  e.contact_id=c.id );2013/11/21 Robert DiFalco <[email protected]>\nI have found this:SELECT c.* FROM contacts cWHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? )\nOR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\nTo have a worse plan than:SELECT * FROM contacts where id IN (( SELECT c.id FROM contacts c JOIN phone p ON c.id = p.contact_id AND p.addr = ? \nUNIONSELECT c.id FROM contacts c JOIN email e ON c.id = e.contact_id AND e.addr = ? );\nMaybe this is no surprise. But after discovering this my question is this, is there another option I dont' know about that is logically the same that can perform even better than the UNION?", "msg_date": "Thu, 21 Nov 2013 12:36:37 -0800", "msg_from": "Robert DiFalco <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UNION versus SUB SELECT" }, { "msg_contents": "Could you please attache the plan with explain buffers verbose?\n\nthank you\n\n\n2013/11/21 Robert DiFalco <[email protected]>\n\n> UNION and subselect both performed better than EXISTS for this particular\n> case.\n>\n>\n> On Thu, Nov 21, 2013 at 12:31 PM, desmodemone <[email protected]>wrote:\n>\n>> Hi Robert, could you try with \"exists\" ?\n>>\n>> SELECT c.*\n>> FROM contacts c\n>> WHERE exists ( SELECT 1 FROM phone p WHERE p.addr =? and p.contact_id=\n>> c.id )\n>> OR exists (SELECT 1 FROM email e WHERE e.addr = ? and e.contact_id=c.id);\n>>\n>>\n>>\n>>\n>>\n>> 2013/11/21 Robert DiFalco <[email protected]>\n>>\n>>> I have found this:\n>>>\n>>> SELECT c.*\n>>> FROM contacts c\n>>> WHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? 
)\n>>> OR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\n>>>\n>>> To have a worse plan than:\n>>>\n>>> SELECT * FROM contacts where id IN (\n>>> ( SELECT c.id FROM contacts c\n>>> JOIN phone p ON c.id = p.contact_id AND p.addr = ?\n>>> UNION\n>>> SELECT c.id FROM contacts c\n>>> JOIN email e ON c.id = e.contact_id AND e.addr = ? );\n>>>\n>>> Maybe this is no surprise. But after discovering this my question is\n>>> this, is there another option I dont' know about that is logically the same\n>>> that can perform even better than the UNION?\n>>>\n>>\n>>\n>\n\nCould you please attache the plan with explain buffers verbose?thank you2013/11/21 Robert DiFalco <[email protected]>\nUNION and subselect both performed better than EXISTS for this particular case.\nOn Thu, Nov 21, 2013 at 12:31 PM, desmodemone <[email protected]> wrote:\nHi Robert, could you try with \"exists\" ?SELECT c.* FROM contacts c\nWHERE  exists  ( SELECT  1 FROM phone p WHERE p.addr =? and  p.contact_id=c.id )\nOR exists (SELECT  1 FROM email e WHERE e.addr = ? and  e.contact_id=c.id );2013/11/21 Robert DiFalco <[email protected]>\nI have found this:SELECT c.* FROM contacts cWHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? )\nOR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\nTo have a worse plan than:SELECT * FROM contacts where id IN (( SELECT c.id FROM contacts c JOIN phone p ON c.id = p.contact_id AND p.addr = ? \nUNIONSELECT c.id FROM contacts c JOIN email e ON c.id = e.contact_id AND e.addr = ? );\nMaybe this is no surprise. But after discovering this my question is this, is there another option I dont' know about that is logically the same that can perform even better than the UNION?", "msg_date": "Thu, 21 Nov 2013 21:38:06 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION versus SUB SELECT" }, { "msg_contents": "Sorry I couldn't get buffers to work but here is the explain analyze\nverbose:\n\ndft1fjfv106r48=> explain analyze verbose select c.*\n\n from contacts c\n\n where c.id IN (\n\n select p.contact_id\n\n from\nphone_numbers p\n\n where (p.national = 5038904993 and p.e164 = '+15038904993'))\n\n or c.id IN (\n\n select e.contact_id\n\n from\nemail_addresses e\n\n where e.email = '[email protected]')\n\n ;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.contacts c (cost=8.12..75.73 rows=1988 width=95)\n(actual time=0.410..0.410 rows=0 loops=1)\n Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call,\nc.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id\n Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))\n Rows Removed by Filter: 2849\n SubPlan 1\n -> Index Scan using idx_phone_address on public.phone_numbers p\n (cost=0.06..4.06 rows=1 width=8) (actual time=0.015..0.015 rows=0 loops=1)\n Output: p.contact_id\n Index Cond: ((p.\"national\" = 5038904993::bigint) AND\n((p.e164)::text = '+15038904993'::text))\n SubPlan 2\n -> Index Scan using idx_email_address on public.email_addresses e\n (cost=0.06..4.06 rows=1 width=8) (actual time=0.018..0.018 rows=0 loops=1)\n Output: e.contact_id\n Index Cond: ((e.email)::text = '[email protected]'::text)\n Total runtime: 0.489 ms\n(13 rows)\n\ndft1fjfv106r48=> explain analyze verbose select c.*\n\n from contacts c\n\n where exists(\n\n select 1\n\n from\nphone_numbers p\n\n where (p.national = 5038904993 and p.e164 
= '+15038904993') and\np.contact_id = c.id)\n or EXISTS(\n\n select 1\n\n from\nemail_addresses e\n\n where e.email = '[email protected]' and e.contact_id = c.id)\n\n ;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.contacts c (cost=0.00..21596.38 rows=1988 width=95)\n(actual time=0.479..0.479 rows=0 loops=1)\n Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call,\nc.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id\n Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) OR (alternatives:\nSubPlan 3 or hashed SubPlan 4))\n Rows Removed by Filter: 2849\n SubPlan 1\n -> Index Scan using idx_phone_address on public.phone_numbers p\n (cost=0.06..4.06 rows=1 width=0) (never executed)\n Index Cond: ((p.\"national\" = 5038904993::bigint) AND\n((p.e164)::text = '+15038904993'::text))\n Filter: (p.contact_id = c.id)\n SubPlan 2\n -> Index Scan using idx_phone_address on public.phone_numbers p_1\n (cost=0.06..4.06 rows=1 width=8) (actual time=0.010..0.010 rows=0 loops=1)\n Output: p_1.contact_id\n Index Cond: ((p_1.\"national\" = 5038904993::bigint) AND\n((p_1.e164)::text = '+15038904993'::text))\n SubPlan 3\n -> Index Scan using idx_email_address on public.email_addresses e\n (cost=0.06..4.06 rows=1 width=0) (never executed)\n Index Cond: ((e.email)::text = '[email protected]'::text)\n Filter: (e.contact_id = c.id)\n SubPlan 4\n -> Index Scan using idx_email_address on public.email_addresses e_1\n (cost=0.06..4.06 rows=1 width=8) (actual time=0.016..0.016 rows=0 loops=1)\n Output: e_1.contact_id\n Index Cond: ((e_1.email)::text = '[email protected]'::text)\n Total runtime: 0.559 ms\n(21 rows)\n\ndft1fjfv106r48=> explain analyze verbose select * from contacts where id IN\n(\n (select c.id\n\n from contacts c\n\n join phone_numbers p on c.id =\np.contact_id and p.national = 5038904993 and p.e164 = '+15038904993')\n union (select\nc.id\n\n from contacts c\n\n join email_addresses e on c.id = e.contact_id and e.email\n= '[email protected]'));\n\n QUERY\nPLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=16.31..24.39 rows=2 width=95) (actual time=0.060..0.060\nrows=0 loops=1)\n Output: contacts.id, contacts.owner_id, contacts.user_id,\ncontacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype,\ncontacts.blocked, contacts.details_hash, contacts.fname, contacts.lname,\ncontacts.fb_id\n -> Unique (cost=16.26..16.26 rows=2 width=8) (actual time=0.057..0.057\nrows=0 loops=1)\n Output: c.id\n -> Sort (cost=16.26..16.26 rows=2 width=8) (actual\ntime=0.055..0.055 rows=0 loops=1)\n Output: c.id\n Sort Key: c.id\n Sort Method: quicksort Memory: 25kB\n -> Append (cost=0.11..16.25 rows=2 width=8) (actual\ntime=0.034..0.034 rows=0 loops=1)\n -> Nested Loop (cost=0.11..8.12 rows=1 width=8)\n(actual time=0.013..0.013 rows=0 loops=1)\n Output: c.id\n -> Index Scan using idx_phone_address on\npublic.phone_numbers p (cost=0.06..4.06 rows=1 width=8) (actual\ntime=0.011..0.011 rows=0 loops=1)\n Output: p.id, p.contact_id, p.\"national\",\np.e164, p.raw_number\n Index Cond: ((p.\"national\" =\n5038904993::bigint) AND ((p.e164)::text = '+15038904993'::text))\n -> Index Only Scan using\nidx_contacts_pkey_owner on public.contacts c 
(cost=0.06..4.06 rows=1\nwidth=8) (never executed)\n Output: c.id, c.owner_id, c.user_id\n Index Cond: (c.id = p.contact_id)\n Heap Fetches: 0\n -> Nested Loop (cost=0.11..8.12 rows=1 width=8)\n(actual time=0.018..0.018 rows=0 loops=1)\n Output: c_1.id\n -> Index Scan using idx_email_address on\npublic.email_addresses e (cost=0.06..4.06 rows=1 width=8) (actual\ntime=0.016..0.016 rows=0 loops=1)\n Output: e.id, e.contact_id, e.email\n Index Cond: ((e.email)::text = '\[email protected]'::text)\n -> Index Only Scan using\nidx_contacts_pkey_owner on public.contacts c_1 (cost=0.06..4.06 rows=1\nwidth=8) (never executed)\n Output: c_1.id, c_1.owner_id, c_1.user_id\n Index Cond: (c_1.id = e.contact_id)\n Heap Fetches: 0\n -> Index Scan using idx_contacts_pkey_owner on public.contacts\n (cost=0.06..4.06 rows=1 width=95) (never executed)\n Output: contacts.id, contacts.owner_id, contacts.user_id,\ncontacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype,\ncontacts.blocked, contacts.details_hash, contacts.fname, contacts.lname,\ncontacts.fb_id\n Index Cond: (contacts.id = c.id)\n Total runtime: 0.332 ms\n(31 rows)\n\n\n\n\nOn Thu, Nov 21, 2013 at 12:38 PM, desmodemone <[email protected]> wrote:\n\n> Could you please attache the plan with explain buffers verbose?\n>\n> thank you\n>\n>\n> 2013/11/21 Robert DiFalco <[email protected]>\n>\n>> UNION and subselect both performed better than EXISTS for this particular\n>> case.\n>>\n>>\n>> On Thu, Nov 21, 2013 at 12:31 PM, desmodemone <[email protected]>wrote:\n>>\n>>> Hi Robert, could you try with \"exists\" ?\n>>>\n>>> SELECT c.*\n>>> FROM contacts c\n>>> WHERE exists ( SELECT 1 FROM phone p WHERE p.addr =? and\n>>> p.contact_id=c.id )\n>>> OR exists (SELECT 1 FROM email e WHERE e.addr = ? and e.contact_id=\n>>> c.id );\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> 2013/11/21 Robert DiFalco <[email protected]>\n>>>\n>>>> I have found this:\n>>>>\n>>>> SELECT c.*\n>>>> FROM contacts c\n>>>> WHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? )\n>>>> OR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\n>>>>\n>>>> To have a worse plan than:\n>>>>\n>>>> SELECT * FROM contacts where id IN (\n>>>> ( SELECT c.id FROM contacts c\n>>>> JOIN phone p ON c.id = p.contact_id AND p.addr = ?\n>>>> UNION\n>>>> SELECT c.id FROM contacts c\n>>>> JOIN email e ON c.id = e.contact_id AND e.addr = ? );\n>>>>\n>>>> Maybe this is no surprise. 
But after discovering this my question is\n>>>> this, is there another option I dont' know about that is logically the same\n>>>> that can perform even better than the UNION?\n>>>>\n>>>\n>>>\n>>\n>\n\nSorry I couldn't get buffers to work but here is the explain analyze verbose:dft1fjfv106r48=> explain analyze verbose select c.*                                                                                                                    from contacts c                                                                                                                                                        where c.id IN (                                                                                                                                                            select p.contact_id                                                                                                                                                    from phone_numbers p                                                                                                                                                   where (p.national = 5038904993 and p.e164 = '+15038904993'))                                                                                                       or c.id IN (                                                                                                                                                               select e.contact_id                                                                                                                                                    from email_addresses e                                                                                                                                                 where e.email = '[email protected]')                                                                                                                          ;\n                                                                     QUERY PLAN                                                                     ----------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.contacts c  (cost=8.12..75.73 rows=1988 width=95) (actual time=0.410..0.410 rows=0 loops=1)   Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call, c.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id\n   Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))   Rows Removed by Filter: 2849   SubPlan 1     ->  Index Scan using idx_phone_address on public.phone_numbers p  (cost=0.06..4.06 rows=1 width=8) (actual time=0.015..0.015 rows=0 loops=1)\n           Output: p.contact_id           Index Cond: ((p.\"national\" = 5038904993::bigint) AND ((p.e164)::text = '+15038904993'::text))   SubPlan 2     ->  Index Scan using idx_email_address on public.email_addresses e  (cost=0.06..4.06 rows=1 width=8) (actual time=0.018..0.018 rows=0 loops=1)\n           Output: e.contact_id           Index Cond: ((e.email)::text = '[email protected]'::text) Total runtime: 0.489 ms(13 rows)\ndft1fjfv106r48=> explain analyze verbose select c.*                                                                                                                    from contacts c                                                                                                                                                        where exists(                                    
                                                                                                                          select 1                                                                                                                                                               from phone_numbers p                                                                                                                                                   where (p.national = 5038904993 and p.e164 = '+15038904993') and p.contact_id = c.id)                                                                               or EXISTS(                                                                                                                                                                 select 1                                                                                                                                                               from email_addresses e                                                                                                                                                 where e.email = '[email protected]' and e.contact_id = c.id)                                                                                                  ;\n                                                                      QUERY PLAN                                                                      ------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.contacts c  (cost=0.00..21596.38 rows=1988 width=95) (actual time=0.479..0.479 rows=0 loops=1)   Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call, c.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id\n   Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) OR (alternatives: SubPlan 3 or hashed SubPlan 4))   Rows Removed by Filter: 2849   SubPlan 1     ->  Index Scan using idx_phone_address on public.phone_numbers p  (cost=0.06..4.06 rows=1 width=0) (never executed)\n           Index Cond: ((p.\"national\" = 5038904993::bigint) AND ((p.e164)::text = '+15038904993'::text))           Filter: (p.contact_id = c.id)   SubPlan 2\n     ->  Index Scan using idx_phone_address on public.phone_numbers p_1  (cost=0.06..4.06 rows=1 width=8) (actual time=0.010..0.010 rows=0 loops=1)           Output: p_1.contact_id           Index Cond: ((p_1.\"national\" = 5038904993::bigint) AND ((p_1.e164)::text = '+15038904993'::text))\n   SubPlan 3     ->  Index Scan using idx_email_address on public.email_addresses e  (cost=0.06..4.06 rows=1 width=0) (never executed)           Index Cond: ((e.email)::text = '[email protected]'::text)\n           Filter: (e.contact_id = c.id)   SubPlan 4     ->  Index Scan using idx_email_address on public.email_addresses e_1  (cost=0.06..4.06 rows=1 width=8) (actual time=0.016..0.016 rows=0 loops=1)\n           Output: e_1.contact_id           Index Cond: ((e_1.email)::text = '[email protected]'::text) Total runtime: 0.559 ms\n(21 rows)dft1fjfv106r48=> explain analyze verbose select * from contacts where id IN (                                                                                          (select c.id                                                                                                                                                           from contacts c                                                                                              
                                                          join phone_numbers p on c.id = p.contact_id and p.national = 5038904993 and p.e164 = '+15038904993')                                                                   union (select c.id                                                                                                                                                     from contacts c                                                                                                                                                        join email_addresses e on c.id = e.contact_id and e.email = '[email protected]'));                                                                                                                                                                                              QUERY PLAN                                                                                                               ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=16.31..24.39 rows=2 width=95) (actual time=0.060..0.060 rows=0 loops=1)   Output: contacts.id, contacts.owner_id, contacts.user_id, contacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype, contacts.blocked, contacts.details_hash, contacts.fname, contacts.lname, contacts.fb_id\n   ->  Unique  (cost=16.26..16.26 rows=2 width=8) (actual time=0.057..0.057 rows=0 loops=1)         Output: c.id         ->  Sort  (cost=16.26..16.26 rows=2 width=8) (actual time=0.055..0.055 rows=0 loops=1)\n               Output: c.id               Sort Key: c.id               Sort Method: quicksort  Memory: 25kB               ->  Append  (cost=0.11..16.25 rows=2 width=8) (actual time=0.034..0.034 rows=0 loops=1)\n                     ->  Nested Loop  (cost=0.11..8.12 rows=1 width=8) (actual time=0.013..0.013 rows=0 loops=1)                           Output: c.id                           ->  Index Scan using idx_phone_address on public.phone_numbers p  (cost=0.06..4.06 rows=1 width=8) (actual time=0.011..0.011 rows=0 loops=1)\n                                 Output: p.id, p.contact_id, p.\"national\", p.e164, p.raw_number                                 Index Cond: ((p.\"national\" = 5038904993::bigint) AND ((p.e164)::text = '+15038904993'::text))\n                           ->  Index Only Scan using idx_contacts_pkey_owner on public.contacts c  (cost=0.06..4.06 rows=1 width=8) (never executed)                                 Output: c.id, c.owner_id, c.user_id\n                                 Index Cond: (c.id = p.contact_id)                                 Heap Fetches: 0                     ->  Nested Loop  (cost=0.11..8.12 rows=1 width=8) (actual time=0.018..0.018 rows=0 loops=1)\n                           Output: c_1.id                           ->  Index Scan using idx_email_address on public.email_addresses e  (cost=0.06..4.06 rows=1 width=8) (actual time=0.016..0.016 rows=0 loops=1)\n                                 Output: e.id, e.contact_id, e.email                                 Index Cond: ((e.email)::text = '[email protected]'::text)\n                           ->  Index Only Scan using idx_contacts_pkey_owner on public.contacts c_1  (cost=0.06..4.06 rows=1 width=8) (never executed)                                 Output: c_1.id, c_1.owner_id, c_1.user_id\n                                 Index 
Cond: (c_1.id = e.contact_id)                                 Heap Fetches: 0   ->  Index Scan using idx_contacts_pkey_owner on public.contacts  (cost=0.06..4.06 rows=1 width=95) (never executed)\n         Output: contacts.id, contacts.owner_id, contacts.user_id, contacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype, contacts.blocked, contacts.details_hash, contacts.fname, contacts.lname, contacts.fb_id\n         Index Cond: (contacts.id = c.id) Total runtime: 0.332 ms(31 rows)\nOn Thu, Nov 21, 2013 at 12:38 PM, desmodemone <[email protected]> wrote:\nCould you please attache the plan with explain buffers verbose?thank you2013/11/21 Robert DiFalco <[email protected]>\nUNION and subselect both performed better than EXISTS for this particular case.\nOn Thu, Nov 21, 2013 at 12:31 PM, desmodemone <[email protected]> wrote:\nHi Robert, could you try with \"exists\" ?SELECT c.* FROM contacts c\nWHERE  exists  ( SELECT  1 FROM phone p WHERE p.addr =? and  p.contact_id=c.id )\nOR exists (SELECT  1 FROM email e WHERE e.addr = ? and  e.contact_id=c.id );2013/11/21 Robert DiFalco <[email protected]>\nI have found this:SELECT c.* FROM contacts cWHERE c.id IN ( SELECT p.contact_id FROM phone p WHERE p.addr = ? )\nOR c.id IN (SELECT e.contact_id FROM email e WHERE e.addr = ? );\nTo have a worse plan than:SELECT * FROM contacts where id IN (( SELECT c.id FROM contacts c JOIN phone p ON c.id = p.contact_id AND p.addr = ? \nUNIONSELECT c.id FROM contacts c JOIN email e ON c.id = e.contact_id AND e.addr = ? );\nMaybe this is no surprise. But after discovering this my question is this, is there another option I dont' know about that is logically the same that can perform even better than the UNION?", "msg_date": "Thu, 21 Nov 2013 13:12:36 -0800", "msg_from": "Robert DiFalco <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UNION versus SUB SELECT" }, { "msg_contents": "Hmmmm...I'm not sure why the buffers option didn't work for me, maybe the\nheroku psql is out of date. No, the query gets slower with a high load of\ndata and runs pretty often.\n\nI just created a small test dataset for this. 
When I have a larger one I\nwill post new explain plans but the timings seem pretty consistent\nregardless of the results returns (usually only 2-200) even when there are\nmillions of records in \"contacts\", \"phone_numbers\", and \"email_addresses\".\n\nIn this case doesn't the correlated query have to do more work and access\nmore columns than the subselect approach?\n\n\nOn Thu, Nov 21, 2013 at 1:22 PM, Elliot <[email protected]> wrote:\n\n> On 2013-11-21 16:12, Robert DiFalco wrote:\n>\n> Sorry I couldn't get buffers to work but here is the explain analyze\n> verbose:\n>\n> dft1fjfv106r48=> explain analyze verbose select c.*\n>\n> from contacts c\n>\n> where c.id IN (\n>\n> select\n> p.contact_id\n>\n> from phone_numbers p\n>\n> where (p.national = 5038904993 and p.e164 = '\n> +15038904993'))\n> or c.id IN (\n>\n> select\n> e.contact_id\n>\n> from email_addresses e\n>\n> where e.email = '[email protected]')\n>\n> ;\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on public.contacts c (cost=8.12..75.73 rows=1988 width=95)\n> (actual time=0.410..0.410 rows=0 loops=1)\n> Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call,\n> c.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id\n> Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))\n> Rows Removed by Filter: 2849\n> SubPlan 1\n> -> Index Scan using idx_phone_address on public.phone_numbers p\n> (cost=0.06..4.06 rows=1 width=8) (actual time=0.015..0.015 rows=0 loops=1)\n> Output: p.contact_id\n> Index Cond: ((p.\"national\" = 5038904993::bigint) AND\n> ((p.e164)::text = '+15038904993'::text))\n> SubPlan 2\n> -> Index Scan using idx_email_address on public.email_addresses e\n> (cost=0.06..4.06 rows=1 width=8) (actual time=0.018..0.018 rows=0 loops=1)\n> Output: e.contact_id\n> Index Cond: ((e.email)::text = '[email protected]'::text)\n> Total runtime: 0.489 ms\n> (13 rows)\n>\n> dft1fjfv106r48=> explain analyze verbose select c.*\n>\n> from contacts c\n>\n> where exists(\n>\n> select 1\n>\n> from\n> phone_numbers p\n>\n> where (p.national = 5038904993 and p.e164 = '+15038904993') and\n> p.contact_id = c.id)\n> or EXISTS(\n>\n> select 1\n>\n> from\n> email_addresses e\n>\n> where e.email = '[email protected]' and e.contact_id = c.id)\n>\n> ;\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on public.contacts c (cost=0.00..21596.38 rows=1988 width=95)\n> (actual time=0.479..0.479 rows=0 loops=1)\n> Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call,\n> c.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id\n> Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) OR\n> (alternatives: SubPlan 3 or hashed SubPlan 4))\n> Rows Removed by Filter: 2849\n> SubPlan 1\n> -> Index Scan using idx_phone_address on public.phone_numbers p\n> (cost=0.06..4.06 rows=1 width=0) (never executed)\n> Index Cond: ((p.\"national\" = 5038904993::bigint) AND\n> ((p.e164)::text = '+15038904993'::text))\n> Filter: (p.contact_id = c.id)\n> SubPlan 2\n> -> Index Scan using idx_phone_address on public.phone_numbers p_1\n> (cost=0.06..4.06 rows=1 width=8) (actual time=0.010..0.010 rows=0 loops=1)\n> Output: p_1.contact_id\n> Index Cond: ((p_1.\"national\" = 5038904993::bigint) AND\n> ((p_1.e164)::text = '+15038904993'::text))\n> SubPlan 3\n> 
-> Index Scan using idx_email_address on public.email_addresses e\n> (cost=0.06..4.06 rows=1 width=0) (never executed)\n> Index Cond: ((e.email)::text = '[email protected]'::text)\n> Filter: (e.contact_id = c.id)\n> SubPlan 4\n> -> Index Scan using idx_email_address on public.email_addresses e_1\n> (cost=0.06..4.06 rows=1 width=8) (actual time=0.016..0.016 rows=0 loops=1)\n> Output: e_1.contact_id\n> Index Cond: ((e_1.email)::text = '[email protected]\n> '::text)\n> Total runtime: 0.559 ms\n> (21 rows)\n>\n> dft1fjfv106r48=> explain analyze verbose select * from contacts where id\n> IN (\n> (select c.id\n>\n> from contacts c\n>\n> join phone_numbers p on\n> c.id = p.contact_id and p.national = 5038904993 and p.e164 = '+15038904993')\n> union\n> (select c.id\n>\n> from contacts c\n>\n> join email_addresses e on c.id = e.contact_id\n> and e.email = '[email protected]'));\n>\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=16.31..24.39 rows=2 width=95) (actual\n> time=0.060..0.060 rows=0 loops=1)\n> Output: contacts.id, contacts.owner_id, contacts.user_id,\n> contacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype,\n> contacts.blocked, contacts.details_hash, contacts.fname, contacts.lname,\n> contacts.fb_id\n> -> Unique (cost=16.26..16.26 rows=2 width=8) (actual\n> time=0.057..0.057 rows=0 loops=1)\n> Output: c.id\n> -> Sort (cost=16.26..16.26 rows=2 width=8) (actual\n> time=0.055..0.055 rows=0 loops=1)\n> Output: c.id\n> Sort Key: c.id\n> Sort Method: quicksort Memory: 25kB\n> -> Append (cost=0.11..16.25 rows=2 width=8) (actual\n> time=0.034..0.034 rows=0 loops=1)\n> -> Nested Loop (cost=0.11..8.12 rows=1 width=8)\n> (actual time=0.013..0.013 rows=0 loops=1)\n> Output: c.id\n> -> Index Scan using idx_phone_address on\n> public.phone_numbers p (cost=0.06..4.06 rows=1 width=8) (actual\n> time=0.011..0.011 rows=0 loops=1)\n> Output: p.id, p.contact_id,\n> p.\"national\", p.e164, p.raw_number\n> Index Cond: ((p.\"national\" =\n> 5038904993::bigint) AND ((p.e164)::text = '+15038904993'::text))\n> -> Index Only Scan using\n> idx_contacts_pkey_owner on public.contacts c (cost=0.06..4.06 rows=1\n> width=8) (never executed)\n> Output: c.id, c.owner_id, c.user_id\n> Index Cond: (c.id = p.contact_id)\n> Heap Fetches: 0\n> -> Nested Loop (cost=0.11..8.12 rows=1 width=8)\n> (actual time=0.018..0.018 rows=0 loops=1)\n> Output: c_1.id\n> -> Index Scan using idx_email_address on\n> public.email_addresses e (cost=0.06..4.06 rows=1 width=8) (actual\n> time=0.016..0.016 rows=0 loops=1)\n> Output: e.id, e.contact_id, e.email\n> Index Cond: ((e.email)::text = '\n> [email protected]'::text)\n> -> Index Only Scan using\n> idx_contacts_pkey_owner on public.contacts c_1 (cost=0.06..4.06 rows=1\n> width=8) (never executed)\n> Output: c_1.id, c_1.owner_id, c_1.user_id\n> Index Cond: (c_1.id = e.contact_id)\n> Heap Fetches: 0\n> -> Index Scan using idx_contacts_pkey_owner on public.contacts\n> (cost=0.06..4.06 rows=1 width=95) (never executed)\n> Output: contacts.id, contacts.owner_id, contacts.user_id,\n> contacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype,\n> contacts.blocked, contacts.details_hash, contacts.fname, contacts.lname,\n> contacts.fb_id\n> Index Cond: (contacts.id = c.id)\n> Total runtime: 0.332 ms\n> (31 rows)\n>\n>\n>\n> The buffers option 
is 9.0+ and is used like \"explain (analyze, verbose,\n> buffers) select 1\".\n> To your original question, the union output there runs slightly faster\n> than the \"in\" approach, although this may not be a good example - your\n> inputs don't return any data, so this might not be realistic - and those\n> numbers are so low that the difference might just be noise.\n> Are you tuning a <0.5ms-runtime query? Or is this just curiosity? FWIW I\n> tend to write queries like this using an exists check first, then if that's\n> still not good enough (all things like proper indexing taken in to account)\n> I'll try an in check, then finally a union if that's still not good enough.\n>\n>\n\nHmmmm...I'm not sure why the buffers option didn't work for me, maybe the heroku psql is out of date. No, the query gets slower with a high load of data and runs pretty often.I just created a small test dataset for this. When I have a larger one I will post new explain plans but the timings seem pretty consistent regardless of the results returns (usually only 2-200) even when there are millions of records in \"contacts\", \"phone_numbers\", and \"email_addresses\".\nIn this case doesn't the correlated query have to do more work and access more columns than the subselect approach?On Thu, Nov 21, 2013 at 1:22 PM, Elliot <[email protected]> wrote:\n\n\nOn 2013-11-21 16:12, Robert DiFalco\n wrote:\n\n\nSorry I couldn't get buffers to work but here is\n the explain analyze verbose:\n \n\n\ndft1fjfv106r48=> explain analyze verbose select c.*  \n                                                            \n                                                      from\n contacts c                                                  \n                                                            \n                                          where c.id IN (  \n                                                            \n                                                            \n                                  select p.contact_id        \n                                                            \n                                                            \n                    from phone_numbers p                    \n                                                            \n                                                            \n       where (p.national = 5038904993 and p.e164 =\n '+15038904993'))                                            \n                                                           or\n c.id IN (  \n                                                            \n                                                            \n                                     select e.contact_id    \n                                                            \n                                                            \n                        from email_addresses e              \n                                                            \n                                                            \n           where e.email = '[email protected]')\n                                                            \n                                                            \n  ;\n                                                         \n            QUERY PLAN                                      \n                               \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.contacts c  
time=0.013..0.013 rows=0 loops=1)\n                           Output: c.id\n                           ->  Index Scan using\n idx_phone_address on public.phone_numbers p\n  (cost=0.06..4.06 rows=1 width=8) (actual time=0.011..0.011\n rows=0 loops=1)\n                                 Output: p.id,\n p.contact_id, p.\"national\", p.e164, p.raw_number\n                                 Index Cond:\n ((p.\"national\" = 5038904993::bigint) AND ((p.e164)::text =\n '+15038904993'::text))\n                           ->  Index Only Scan using\n idx_contacts_pkey_owner on public.contacts c\n  (cost=0.06..4.06 rows=1 width=8) (never executed)\n                                 Output: c.id,\n c.owner_id, c.user_id\n                                 Index Cond: (c.id =\n p.contact_id)\n                                 Heap Fetches: 0\n                     ->  Nested Loop  (cost=0.11..8.12\n rows=1 width=8) (actual time=0.018..0.018 rows=0 loops=1)\n                           Output: c_1.id\n                           ->  Index Scan using\n idx_email_address on public.email_addresses e\n  (cost=0.06..4.06 rows=1 width=8) (actual time=0.016..0.016\n rows=0 loops=1)\n                                 Output: e.id,\n e.contact_id, e.email\n                                 Index Cond:\n ((e.email)::text = '[email protected]'::text)\n                           ->  Index Only Scan using\n idx_contacts_pkey_owner on public.contacts c_1\n  (cost=0.06..4.06 rows=1 width=8) (never executed)\n                                 Output: c_1.id,\n c_1.owner_id, c_1.user_id\n                                 Index Cond: (c_1.id =\n e.contact_id)\n                                 Heap Fetches: 0\n   ->  Index Scan using idx_contacts_pkey_owner on\n public.contacts  (cost=0.06..4.06 rows=1 width=95) (never\n executed)\n         Output: contacts.id,\n contacts.owner_id, contacts.user_id, contacts.device_id,\n contacts.last_call, contacts.record_id, contacts.dtype,\n contacts.blocked, contacts.details_hash, contacts.fname,\n contacts.lname, contacts.fb_id\n         Index Cond: (contacts.id = c.id)\n Total runtime: 0.332 ms\n(31 rows)\n\n\n\n\n\n\n\n\n\n\n The buffers option is 9.0+ and is used like \"explain (analyze,\n verbose, buffers) select 1\".\n To your original question, the union output there runs slightly\n faster than the \"in\" approach, although this may not be a good\n example - your inputs don't return any data, so this might not be\n realistic - and those numbers are so low that the difference might\n just be noise.\n Are you tuning a <0.5ms-runtime query? Or is this just curiosity?\n FWIW I tend to write queries like this using an exists check first,\n then if that's still not good enough (all things like proper\n indexing taken in to account) I'll try an in check, then finally a\n union if that's still not good enough.", "msg_date": "Thu, 21 Nov 2013 14:04:02 -0800", "msg_from": "Robert DiFalco <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UNION versus SUB SELECT" }, { "msg_contents": "On Thu, Nov 21, 2013 at 2:04 PM, Robert DiFalco <[email protected]>wrote:\n\n> Hmmmm...I'm not sure why the buffers option didn't work for me, maybe the\n> heroku psql is out of date.\n>\n\nDid you enclose it in brackets? Eg. \"EXPLAIN (ANALYZE, BUFFERS) SELECT ...\"\n\nOn Thu, Nov 21, 2013 at 2:04 PM, Robert DiFalco <[email protected]> wrote:\nHmmmm...I'm not sure why the buffers option didn't work for me, maybe the heroku psql is out of date.\nDid you enclose it in brackets? Eg. 
\"EXPLAIN (ANALYZE, BUFFERS) SELECT ...\"", "msg_date": "Thu, 21 Nov 2013 14:58:55 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION versus SUB SELECT" }, { "msg_contents": "On Thu, Nov 21, 2013 at 2:58 PM, bricklen <[email protected]> wrote:\n\n>\n> Did you enclose it in brackets? Eg. \"EXPLAIN (ANALYZE, BUFFERS) SELECT ...\"\n>\n\nNever mind, I see it further down. My apologies.\n\nOn Thu, Nov 21, 2013 at 2:58 PM, bricklen <[email protected]> wrote:\nDid you enclose it in brackets? Eg. \"EXPLAIN (ANALYZE, BUFFERS) SELECT ...\"\nNever mind, I see it further down. My apologies.", "msg_date": "Thu, 21 Nov 2013 14:59:42 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION versus SUB SELECT" }, { "msg_contents": "No I didn't, thank you. I missed the parens.\n\n\nOn Thu, Nov 21, 2013 at 2:58 PM, bricklen <[email protected]> wrote:\n\n> On Thu, Nov 21, 2013 at 2:04 PM, Robert DiFalco <[email protected]>wrote:\n>\n>> Hmmmm...I'm not sure why the buffers option didn't work for me, maybe the\n>> heroku psql is out of date.\n>>\n>\n> Did you enclose it in brackets? Eg. \"EXPLAIN (ANALYZE, BUFFERS) SELECT ...\"\n>\n\nNo I didn't, thank you.  I missed the parens.On Thu, Nov 21, 2013 at 2:58 PM, bricklen <[email protected]> wrote:\nOn Thu, Nov 21, 2013 at 2:04 PM, Robert DiFalco <[email protected]> wrote:\nHmmmm...I'm not sure why the buffers option didn't work for me, maybe the heroku psql is out of date.\nDid you enclose it in brackets? Eg. \"EXPLAIN (ANALYZE, BUFFERS) SELECT ...\"", "msg_date": "Thu, 21 Nov 2013 15:13:56 -0800", "msg_from": "Robert DiFalco <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UNION versus SUB SELECT" }, { "msg_contents": "On Thu, Nov 21, 2013 at 2:31 PM, desmodemone <[email protected]> wrote:\n> Hi Robert, could you try with \"exists\" ?\n>\n> SELECT c.*\n> FROM contacts c\n> WHERE exists ( SELECT 1 FROM phone p WHERE p.addr =? and\n> p.contact_id=c.id )\n> OR exists (SELECT 1 FROM email e WHERE e.addr = ? and e.contact_id=c.id );\n\nhm, how about:\nSELECT c.*\nFROM contacts c\nWHERE exists (\n SELECT 1\n FROM phone p\n WHERE p.addr =? AND p.contact_id=c.id\n UNION ALL\n SELECT 1 FROM email e\n WHERE e.addr = ? AND e.contact_id=c.id\n);\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 22 Nov 2013 09:54:29 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION versus SUB SELECT" }, { "msg_contents": "So far that one was the worst in terms of cost and time. Here are all the\nplans with buffers, more records, and results being returned. At this point\nI have good enough performance with my UNION approach but I'm just trying\nto learn now. WHY is the union approach the fastest? I would have expected\nthe EXISTS or IN approaches to be faster or at least have the SAME cost? 
At\nthis point I just want to understand.\n\ndft1fjfv106r48=> explain (analyze, buffers, verbose)\nselect *\nfrom contacts c\nwhere EXISTS(\n (select 1 from phone_numbers p where c.id = p.contact_id and p.national\n= 5038904993 and p.e164 = '+15038904993')\n union\n (select 1 id = e.contact_id and e.email = '[email protected]'));\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.contacts c (cost=0.00..23238.90 rows=1425 width=95)\n(actual time=2.241..46.817 rows=7 loops=1)\n Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call,\nc.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id\n Filter: (SubPlan 1)\n Rows Removed by Filter: 2843\n Buffers: shared hit=11497\n SubPlan 1\n -> Unique (cost=8.13..8.13 rows=2 width=0) (actual time=0.015..0.015\nrows=0 loops=2850)\n Output: (1)\n Buffers: shared hit=11440\n -> Sort (cost=8.13..8.13 rows=2 width=0) (actual\ntime=0.013..0.013 rows=0 loops=2850)\n Output: (1)\n Sort Key: (1)\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=11440\n -> Append (cost=0.06..8.13 rows=2 width=0) (actual\ntime=0.009..0.009 rows=0 loops=2850)\n Buffers: shared hit=11440\n -> Index Only Scan using idx_phone on\npublic.phone_numbers p (cost=0.06..4.06 rows=1 width=0) (actual\ntime=0.003..0.003 rows=0 loops=2850)\n Output: 1\n Index Cond: ((p.contact_id = c.id) AND\n(p.\"national\" = 5038904993::bigint) AND (p.e164 = '+15038904993'::text))\n Heap Fetches: 11\n Buffers: shared hit=5721\n -> Index Only Scan using idx_email_full on\npublic.email_addresses e (cost=0.06..4.06 rows=1 width=0) (actual\ntime=0.003..0.003 rows=0 loops=2850)\n Output: 1\n Index Cond: ((e.contact_id = c.id) AND\n(e.email = '[email protected]'::text))\n Heap Fetches: 5\n Buffers: shared hit=5719\n Total runtime: 46.897 ms\n(27 rows)\n\ndft1fjfv106r48=> explain (analyze, buffers, verbose)\nselect *\nfrom contacts where id IN (\n (select c.id\n from contacts c join phone_numbers p on c.id = p.contact_id and\np.national = 5038904993 and p.e164 = '+15038904993')\n union\n (select c.id from contacts c join email_addresses e on c.id =\ne.contact_id and e.email = '[email protected]'));\n\n QUERY PLAN\n\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=29.38..53.74 rows=6 width=95) (actual time=0.356..0.418\nrows=7 loops=1)\n Output: contacts.id, contacts.owner_id, contacts.user_id,\ncontacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype,\ncontacts.blocked, contacts.details_hash, contacts.fname, contacts.lname,\ncontacts.fb_id\n Buffers: shared hit=87\n -> HashAggregate (cost=29.32..29.34 rows=6 width=8) (actual\ntime=0.347..0.354 rows=7 loops=1)\n Output: c.id\n Buffers: shared hit=66\n -> Append (cost=0.11..29.32 rows=6 width=8) (actual\ntime=0.047..0.316 rows=16 loops=1)\n Buffers: shared hit=66\n -> Nested Loop (cost=0.11..8.12 rows=1 width=8) (actual\ntime=0.045..0.169 rows=11 loops=1)\n Output: c.id\n Buffers: shared hit=43\n -> Index Scan using idx_phone_address on\npublic.phone_numbers p (cost=0.06..4.06 rows=1 width=8) (actual\ntime=0.027..0.047 rows=11 loops=1)\n Output: p.id, p.contact_id, p.\"national\",\np.e164, p.raw_number\n Index Cond: ((p.\"national\" = 
5038904993::bigint)\nAND ((p.e164)::text = '+15038904993'::text))\n Buffers: shared hit=9\n -> Index Only Scan using idx_contacts_pkey_owner on\npublic.contacts c (cost=0.06..4.06 rows=1 width=8) (actual\ntime=0.005..0.006 rows=1 loops=11)\n Output: c.id, c.owner_id, c.user_id\n Index Cond: (c.id = p.contact_id)\n Heap Fetches: 11\n Buffers: shared hit=34\n -> Nested Loop (cost=2.12..21.17 rows=5 width=8) (actual\ntime=0.057..0.114 rows=5 loops=1)\n Output: c_1.id\n Buffers: shared hit=23\n -> Bitmap Heap Scan on public.email_addresses e\n (cost=2.06..8.85 rows=5 width=8) (actual time=0.044..0.055 rows=5 loops=1)\n Output: e.id, e.contact_id, e.email\n Recheck Cond: ((e.email)::text = '\[email protected]'::text)\n Buffers: shared hit=7\n -> Bitmap Index Scan on idx_email_address\n (cost=0.00..2.06 rows=5 width=0) (actual time=0.031..0.031 rows=6 loops=1)\n Index Cond: ((e.email)::text = '\[email protected]'::text)\n Buffers: shared hit=2\n -> Index Only Scan using idx_contacts_pkey_owner on\npublic.contacts c_1 (cost=0.06..2.46 rows=1 width=8) (actual\ntime=0.005..0.006 rows=1 loops=5)\n Output: c_1.id, c_1.owner_id, c_1.user_id\n Index Cond: (c_1.id = e.contact_id)\n Heap Fetches: 5\n Buffers: shared hit=16\n -> Index Scan using idx_contacts_pkey_owner on public.contacts\n (cost=0.06..4.06 rows=1 width=95) (actual time=0.003..0.004 rows=1 loops=7)\n Output: contacts.id, contacts.owner_id, contacts.user_id,\ncontacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype,\ncontacts.blocked, contacts.details_hash, contacts.fname, contacts.lname,\ncontacts.fb_id\n Index Cond: (contacts.id = c.id)\n Buffers: shared hit=21\n Total runtime: 0.535 ms\n(40 rows)\n\ndft1fjfv106r48=> explain (analyze, buffers, verbose)\nselect c.*\nfrom contacts c\nwhere EXISTS(select 1 from phone_numbers p where (p.national = 5038904993\nand p.e164 = '+15038904993') and p.contact_id = c.id)\n or EXISTS(select 1 from email_addresses e where e.email = '\[email protected]' and e.contact_id = c.id);\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.contacts c (cost=0.00..23213.25 rows=2138 width=95)\n(actual time=0.209..1.290 rows=7 loops=1)\n Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call,\nc.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id\n Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) OR (alternatives:\nSubPlan 3 or hashed SubPlan 4))\n Rows Removed by Filter: 2843\n Buffers: shared hit=73\n SubPlan 1\n -> Index Only Scan using idx_phone on public.phone_numbers p\n (cost=0.06..4.06 rows=1 width=0) (never executed)\n Index Cond: ((p.contact_id = c.id) AND (p.\"national\" =\n5038904993::bigint) AND (p.e164 = '+15038904993'::text))\n Heap Fetches: 0\n SubPlan 2\n -> Index Scan using idx_phone_address on public.phone_numbers p_1\n (cost=0.06..4.06 rows=1 width=8) (actual time=0.033..0.056 rows=11 loops=1)\n Output: p_1.contact_id\n Index Cond: ((p_1.\"national\" = 5038904994::bigint) AND\n((p_1.e164)::text = '+15038904993'::text))\n Buffers: shared hit=9\n SubPlan 3\n -> Index Only Scan using idx_email_full on public.email_addresses e\n (cost=0.06..4.06 rows=1 width=0) (never executed)\n Index Cond: ((e.contact_id = c.id) AND (e.email = '\[email protected]'::text))\n Heap Fetches: 0\n SubPlan 4\n -> Bitmap Heap Scan on public.email_addresses e_1 (cost=2.06..8.85\nrows=5 width=8) (actual time=0.040..0.050 rows=5 loops=1)\n 
Output: e_1.contact_id\n Recheck Cond: ((e_1.email)::text = '[email protected]'::text)\n Buffers: shared hit=7\n -> Bitmap Index Scan on idx_email_address (cost=0.00..2.06\nrows=5 width=0) (actual time=0.030..0.030 rows=6 loops=1)\n Index Cond: ((e_1.email)::text = '[email protected]\n'::text)\n Buffers: shared hit=2\n Total runtime: 1.395 ms\n(27 rows)\n\ndft1fjfv106r48=> explain (analyze, buffers, verbose)\nselect c.*\nfrom contacts c\nwhere c.id IN (\n select p.contact_id from phone_numbers p where (p.national = 5038904993\nand p.e164 = '+15038904993'))\nor c.id IN (\n select e.contact_id from email_addresses e where e.email = '\[email protected]');\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.contacts c (cost=12.92..81.32 rows=2138 width=95)\n(actual time=0.208..1.283 rows=7 loops=1)\n Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call,\nc.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id\n Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))\n Rows Removed by Filter: 2843\n Buffers: shared hit=73\n SubPlan 1\n -> Index Scan using idx_phone_address on public.phone_numbers p\n (cost=0.06..4.06 rows=1 width=8) (actual time=0.032..0.054 rows=11 loops=1)\n Output: p.contact_id\n Index Cond: ((p.\"national\" = 5038904993::bigint) AND\n((p.e164)::text = '+15038904993'::text))\n Buffers: shared hit=9\n SubPlan 2\n -> Bitmap Heap Scan on public.email_addresses e (cost=2.06..8.85\nrows=5 width=8) (actual time=0.040..0.049 rows=5 loops=1)\n Output: e.contact_id\n Recheck Cond: ((e.email)::text = '[email protected]'::text)\n Buffers: shared hit=7\n -> Bitmap Index Scan on idx_email_address (cost=0.00..2.06\nrows=5 width=0) (actual time=0.031..0.031 rows=6 loops=1)\n Index Cond: ((e.email)::text = '[email protected]\n'::text)\n Buffers: shared hit=2\n Total runtime: 1.371 ms\n(19 rows)\n\ndft1fjfv106r48=>\n\n\nOn Fri, Nov 22, 2013 at 7:54 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Nov 21, 2013 at 2:31 PM, desmodemone <[email protected]>\n> wrote:\n> > Hi Robert, could you try with \"exists\" ?\n> >\n> > SELECT c.*\n> > FROM contacts c\n> > WHERE exists ( SELECT 1 FROM phone p WHERE p.addr =? and\n> > p.contact_id=c.id )\n> > OR exists (SELECT 1 FROM email e WHERE e.addr = ? and e.contact_id=\n> c.id );\n>\n> hm, how about:\n> SELECT c.*\n> FROM contacts c\n> WHERE exists (\n> SELECT 1\n> FROM phone p\n> WHERE p.addr =? AND p.contact_id=c.id\n> UNION ALL\n> SELECT 1 FROM email e\n> WHERE e.addr = ? AND e.contact_id=c.id\n> );\n>\n> merlin\n>\n\nSo far that one was the worst in terms of cost and time. Here are all the plans with buffers, more records, and results being returned. At this point I have good enough performance with my UNION approach but I'm just trying to learn now. WHY is the union approach the fastest? I would have expected the EXISTS or IN approaches to be faster or at least have the SAME cost? 
At this point I just want to understand.\ndft1fjfv106r48=> explain (analyze, buffers, verbose) select * from contacts c where EXISTS(   (select 1 from phone_numbers p where c.id = p.contact_id and p.national = 5038904993 and p.e164 = '+15038904993')\n   union    (select 1 id = e.contact_id and e.email = '[email protected]'));                                                                                QUERY PLAN                                                                                 \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Seq Scan on public.contacts c  (cost=0.00..23238.90 rows=1425 width=95) (actual time=2.241..46.817 rows=7 loops=1)\n   Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call, c.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id   Filter: (SubPlan 1)   Rows Removed by Filter: 2843\n   Buffers: shared hit=11497   SubPlan 1     ->  Unique  (cost=8.13..8.13 rows=2 width=0) (actual time=0.015..0.015 rows=0 loops=2850)           Output: (1)           Buffers: shared hit=11440\n           ->  Sort  (cost=8.13..8.13 rows=2 width=0) (actual time=0.013..0.013 rows=0 loops=2850)                 Output: (1)                 Sort Key: (1)                 Sort Method: quicksort  Memory: 25kB\n                 Buffers: shared hit=11440                 ->  Append  (cost=0.06..8.13 rows=2 width=0) (actual time=0.009..0.009 rows=0 loops=2850)                       Buffers: shared hit=11440\n                       ->  Index Only Scan using idx_phone on public.phone_numbers p  (cost=0.06..4.06 rows=1 width=0) (actual time=0.003..0.003 rows=0 loops=2850)                             Output: 1\n                             Index Cond: ((p.contact_id = c.id) AND (p.\"national\" = 5038904993::bigint) AND (p.e164 = '+15038904993'::text))                             Heap Fetches: 11\n                             Buffers: shared hit=5721                       ->  Index Only Scan using idx_email_full on public.email_addresses e  (cost=0.06..4.06 rows=1 width=0) (actual time=0.003..0.003 rows=0 loops=2850)\n                             Output: 1                             Index Cond: ((e.contact_id = c.id) AND (e.email = '[email protected]'::text))\n                             Heap Fetches: 5                             Buffers: shared hit=5719 Total runtime: 46.897 ms(27 rows)dft1fjfv106r48=> explain (analyze, buffers, verbose) \nselect * from contacts where id IN (  (select c.id  from contacts c join phone_numbers p on c.id = p.contact_id and p.national = 5038904993 and p.e164 = '+15038904993')\n  union   (select c.id from contacts c join email_addresses e on c.id = e.contact_id and e.email = '[email protected]'));\n                                                                                                              QUERY PLAN                                                                                                               \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=29.38..53.74 rows=6 width=95) (actual time=0.356..0.418 rows=7 loops=1)   Output: contacts.id, contacts.owner_id, contacts.user_id, contacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype, contacts.blocked, 
contacts.details_hash, contacts.fname, contacts.lname, contacts.fb_id\n   Buffers: shared hit=87   ->  HashAggregate  (cost=29.32..29.34 rows=6 width=8) (actual time=0.347..0.354 rows=7 loops=1)         Output: c.id         Buffers: shared hit=66\n         ->  Append  (cost=0.11..29.32 rows=6 width=8) (actual time=0.047..0.316 rows=16 loops=1)               Buffers: shared hit=66               ->  Nested Loop  (cost=0.11..8.12 rows=1 width=8) (actual time=0.045..0.169 rows=11 loops=1)\n                     Output: c.id                     Buffers: shared hit=43                     ->  Index Scan using idx_phone_address on public.phone_numbers p  (cost=0.06..4.06 rows=1 width=8) (actual time=0.027..0.047 rows=11 loops=1)\n                           Output: p.id, p.contact_id, p.\"national\", p.e164, p.raw_number                           Index Cond: ((p.\"national\" = 5038904993::bigint) AND ((p.e164)::text = '+15038904993'::text))\n                           Buffers: shared hit=9                     ->  Index Only Scan using idx_contacts_pkey_owner on public.contacts c  (cost=0.06..4.06 rows=1 width=8) (actual time=0.005..0.006 rows=1 loops=11)\n                           Output: c.id, c.owner_id, c.user_id                           Index Cond: (c.id = p.contact_id)                           Heap Fetches: 11\n                           Buffers: shared hit=34               ->  Nested Loop  (cost=2.12..21.17 rows=5 width=8) (actual time=0.057..0.114 rows=5 loops=1)                     Output: c_1.id\n                     Buffers: shared hit=23                     ->  Bitmap Heap Scan on public.email_addresses e  (cost=2.06..8.85 rows=5 width=8) (actual time=0.044..0.055 rows=5 loops=1)                           Output: e.id, e.contact_id, e.email\n                           Recheck Cond: ((e.email)::text = '[email protected]'::text)                           Buffers: shared hit=7                           ->  Bitmap Index Scan on idx_email_address  (cost=0.00..2.06 rows=5 width=0) (actual time=0.031..0.031 rows=6 loops=1)\n                                 Index Cond: ((e.email)::text = '[email protected]'::text)                                 Buffers: shared hit=2\n                     ->  Index Only Scan using idx_contacts_pkey_owner on public.contacts c_1  (cost=0.06..2.46 rows=1 width=8) (actual time=0.005..0.006 rows=1 loops=5)                           Output: c_1.id, c_1.owner_id, c_1.user_id\n                           Index Cond: (c_1.id = e.contact_id)                           Heap Fetches: 5                           Buffers: shared hit=16   ->  Index Scan using idx_contacts_pkey_owner on public.contacts  (cost=0.06..4.06 rows=1 width=95) (actual time=0.003..0.004 rows=1 loops=7)\n         Output: contacts.id, contacts.owner_id, contacts.user_id, contacts.device_id, contacts.last_call, contacts.record_id, contacts.dtype, contacts.blocked, contacts.details_hash, contacts.fname, contacts.lname, contacts.fb_id\n         Index Cond: (contacts.id = c.id)         Buffers: shared hit=21 Total runtime: 0.535 ms(40 rows)\ndft1fjfv106r48=> explain (analyze, buffers, verbose) select c.*from contacts cwhere EXISTS(select 1 from phone_numbers p where (p.national = 5038904993 and p.e164 = '+15038904993') and p.contact_id = c.id)\n   or EXISTS(select 1 from email_addresses e where e.email = '[email protected]' and e.contact_id = c.id);                                                                     QUERY PLAN                                                                      
\n----------------------------------------------------------------------------------------------------------------------------------------------------- Seq Scan on public.contacts c  (cost=0.00..23213.25 rows=2138 width=95) (actual time=0.209..1.290 rows=7 loops=1)\n   Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call, c.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id   Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) OR (alternatives: SubPlan 3 or hashed SubPlan 4))\n   Rows Removed by Filter: 2843   Buffers: shared hit=73   SubPlan 1     ->  Index Only Scan using idx_phone on public.phone_numbers p  (cost=0.06..4.06 rows=1 width=0) (never executed)\n           Index Cond: ((p.contact_id = c.id) AND (p.\"national\" = 5038904993::bigint) AND (p.e164 = '+15038904993'::text))           Heap Fetches: 0   SubPlan 2\n     ->  Index Scan using idx_phone_address on public.phone_numbers p_1  (cost=0.06..4.06 rows=1 width=8) (actual time=0.033..0.056 rows=11 loops=1)           Output: p_1.contact_id           Index Cond: ((p_1.\"national\" = 5038904994::bigint) AND ((p_1.e164)::text = '+15038904993'::text))\n           Buffers: shared hit=9   SubPlan 3     ->  Index Only Scan using idx_email_full on public.email_addresses e  (cost=0.06..4.06 rows=1 width=0) (never executed)           Index Cond: ((e.contact_id = c.id) AND (e.email = '[email protected]'::text))\n           Heap Fetches: 0   SubPlan 4     ->  Bitmap Heap Scan on public.email_addresses e_1  (cost=2.06..8.85 rows=5 width=8) (actual time=0.040..0.050 rows=5 loops=1)           Output: e_1.contact_id\n           Recheck Cond: ((e_1.email)::text = '[email protected]'::text)           Buffers: shared hit=7           ->  Bitmap Index Scan on idx_email_address  (cost=0.00..2.06 rows=5 width=0) (actual time=0.030..0.030 rows=6 loops=1)\n                 Index Cond: ((e_1.email)::text = '[email protected]'::text)                 Buffers: shared hit=2 Total runtime: 1.395 ms\n(27 rows)dft1fjfv106r48=> explain (analyze, buffers, verbose) select c.*from contacts cwhere c.id IN (    select p.contact_id from phone_numbers p where (p.national = 5038904993 and p.e164 = '+15038904993'))\nor c.id IN (    select e.contact_id from email_addresses e where e.email = '[email protected]');                                                                    QUERY PLAN                                                                     \n--------------------------------------------------------------------------------------------------------------------------------------------------- Seq Scan on public.contacts c  (cost=12.92..81.32 rows=2138 width=95) (actual time=0.208..1.283 rows=7 loops=1)\n   Output: c.id, c.owner_id, c.user_id, c.device_id, c.last_call, c.record_id, c.dtype, c.blocked, c.details_hash, c.fname, c.lname, c.fb_id   Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))\n   Rows Removed by Filter: 2843   Buffers: shared hit=73   SubPlan 1     ->  Index Scan using idx_phone_address on public.phone_numbers p  (cost=0.06..4.06 rows=1 width=8) (actual time=0.032..0.054 rows=11 loops=1)\n           Output: p.contact_id           Index Cond: ((p.\"national\" = 5038904993::bigint) AND ((p.e164)::text = '+15038904993'::text))           Buffers: shared hit=9\n   SubPlan 2     ->  Bitmap Heap Scan on public.email_addresses e  (cost=2.06..8.85 rows=5 width=8) (actual time=0.040..0.049 rows=5 loops=1)           Output: e.contact_id           Recheck Cond: ((e.email)::text = '[email protected]'::text)\n           
Buffers: shared hit=7           ->  Bitmap Index Scan on idx_email_address  (cost=0.00..2.06 rows=5 width=0) (actual time=0.031..0.031 rows=6 loops=1)                 Index Cond: ((e.email)::text = '[email protected]'::text)\n                 Buffers: shared hit=2 Total runtime: 1.371 ms(19 rows)dft1fjfv106r48=> On Fri, Nov 22, 2013 at 7:54 AM, Merlin Moncure <[email protected]> wrote:\nOn Thu, Nov 21, 2013 at 2:31 PM, desmodemone <[email protected]> wrote:\n\n> Hi Robert, could you try with \"exists\" ?\n>\n> SELECT c.*\n> FROM contacts c\n> WHERE  exists  ( SELECT  1 FROM phone p WHERE p.addr =? and\n> p.contact_id=c.id )\n> OR exists (SELECT  1 FROM email e WHERE e.addr = ? and  e.contact_id=c.id );\n\nhm, how about:\nSELECT c.*\nFROM contacts c\nWHERE  exists  (\n  SELECT  1\n  FROM phone p\n  WHERE p.addr =? AND p.contact_id=c.id\n  UNION ALL\n  SELECT  1 FROM email e\n  WHERE e.addr = ? AND e.contact_id=c.id\n);\n\nmerlin", "msg_date": "Fri, 22 Nov 2013 09:36:45 -0800", "msg_from": "Robert DiFalco <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UNION versus SUB SELECT" } ]
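The pattern that wins above is worth making concrete. Below is a minimal, self-contained sketch of the two query shapes under discussion, each timed with EXPLAIN (ANALYZE, BUFFERS). The DDL is only a guess at the schema implied by the posted plans (column types, the reduced column lists, and the index definitions are assumptions, not the original ones), and the UNION variant is a simplified form that unions the matching contact_id values directly rather than joining contacts inside each branch, so treat it as an illustration rather than the poster's exact query.

    -- assumed schema, reduced to the columns the queries touch
    CREATE TABLE contacts (
        id    bigserial PRIMARY KEY,
        fname text,
        lname text
    );

    CREATE TABLE phone_numbers (
        id         bigserial PRIMARY KEY,
        contact_id bigint NOT NULL REFERENCES contacts(id),
        national   bigint,
        e164       text
    );

    CREATE TABLE email_addresses (
        id         bigserial PRIMARY KEY,
        contact_id bigint NOT NULL REFERENCES contacts(id),
        email      text
    );

    CREATE INDEX idx_phone_address ON phone_numbers (national, e164);
    CREATE INDEX idx_email_address ON email_addresses (email);

    -- OR of two IN subselects: the posted plans show this shape filtering a
    -- sequential scan of contacts against two hashed subplans
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT c.*
    FROM contacts c
    WHERE c.id IN (SELECT p.contact_id
                   FROM phone_numbers p
                   WHERE p.national = 5038904993
                     AND p.e164 = '+15038904993')
       OR c.id IN (SELECT e.contact_id
                   FROM email_addresses e
                   WHERE e.email = '[email protected]');

    -- a single IN over a UNION of contact ids: the planner can treat this as
    -- one semi-join, so the two selective index scans run once and typically
    -- drive primary-key lookups into contacts
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT c.*
    FROM contacts c
    WHERE c.id IN (SELECT p.contact_id
                   FROM phone_numbers p
                   WHERE p.national = 5038904993
                     AND p.e164 = '+15038904993'
                   UNION
                   SELECT e.contact_id
                   FROM email_addresses e
                   WHERE e.email = '[email protected]');

The shape matters because an OR between two membership tests is not converted into a single semi-join: the posted plans show contacts being scanned sequentially with every row checked against hashed subplans. A single IN (or EXISTS) over the unioned id list can instead be planned as one join driven by the small, selective side. Whether that is faster in practice still depends on table sizes and statistics, so it is worth re-checking with EXPLAIN (ANALYZE, BUFFERS) at realistic data volumes.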
[ { "msg_contents": "Hi,\n\nHaving attended a few PGCons, I've always heard the remark from a few\npresenters and attendees that Postgres shouldn't be run inside a VM. That\nbare metal is the only way to go.\n\nHere at work we were entertaining the idea of running our Postgres database\non our VM farm alongside our application vm's. We are planning to run a\nfew Postgres synchronous replication nodes.\n\nWhy shouldn't we run Postgres in a VM? What are the downsides? Does anyone\nhave any metrics or benchmarks with the latest Postgres?\n\nThanks!\n\nLee Nguyen\n\nHi,Having attended a few PGCons, I've always heard the remark from a few presenters and attendees that Postgres shouldn't be run inside a VM. That bare metal is the only way to go.\nHere at work we were entertaining the idea of running our Postgres database on our VM farm alongside our application vm's.  We are planning to run a few Postgres synchronous replication nodes.\nWhy shouldn't we run Postgres in a VM?  What are the downsides? Does anyone have any metrics or benchmarks with the latest Postgres?Thanks!\nLee Nguyen", "msg_date": "Mon, 25 Nov 2013 15:01:53 -0500", "msg_from": "Lee Nguyen <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql in a Virtual Machine" }, { "msg_contents": "On 26/11/13 09:01, Lee Nguyen wrote:\n> Hi,\n>\n> Having attended a few PGCons, I've always heard the remark from a few \n> presenters and attendees that Postgres shouldn't be run inside a VM. \n> That bare metal is the only way to go.\n>\n> Here at work we were entertaining the idea of running our Postgres \n> database on our VM farm alongside our application vm's. We are \n> planning to run a few Postgres synchronous replication nodes.\n>\n> Why shouldn't we run Postgres in a VM? What are the downsides? Does \n> anyone have any metrics or benchmarks with the latest Postgres?\n>\n> Thanks!\n>\n> Lee Nguyen\nI suspect that it is a performance and reliability issue that affects \nany ACID database.\n\nAFAIK, in a VM there is less certainty as to when a disk I/O is actually \ncomplete and safely on the disk.\n\nI think vm's are probably fine for testing, but not for production.\n\n\nCheers,\nGavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 09:08:14 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On 25.11.2013 22:01, Lee Nguyen wrote:\n> Hi,\n>\n> Having attended a few PGCons, I've always heard the remark from a few\n> presenters and attendees that Postgres shouldn't be run inside a VM. That\n> bare metal is the only way to go.\n>\n> Here at work we were entertaining the idea of running our Postgres database\n> on our VM farm alongside our application vm's. We are planning to run a\n> few Postgres synchronous replication nodes.\n>\n> Why shouldn't we run Postgres in a VM? What are the downsides? Does anyone\n> have any metrics or benchmarks with the latest Postgres?\n\nI've also heard people say that they've seen PostgreSQL to perform worse \nin a VM. In the performance testing that we've done in VMware, though, \nwe haven't seen any big impact. So I guess the answer is that it depends \non the specific configuration of CPU, memory, disks and the software. \nSynchronous replication is likely going to be the biggest bottleneck by \nfar, unless it's mostly read-only. 
I don't know if virtualization will \nhave a measurable impact on network latency, which is what matters for \nsynchronous replication.\n\nSo, I'd suggest that you try it yourself, and see how it performs. And \nplease report back to the list, I'd also love to see some numbers!\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Nov 2013 22:19:05 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "\nOn 11/25/2013 03:19 PM, Heikki Linnakangas wrote:\n> On 25.11.2013 22:01, Lee Nguyen wrote:\n>> Hi,\n>>\n>> Having attended a few PGCons, I've always heard the remark from a few\n>> presenters and attendees that Postgres shouldn't be run inside a VM. \n>> That\n>> bare metal is the only way to go.\n>>\n>> Here at work we were entertaining the idea of running our Postgres \n>> database\n>> on our VM farm alongside our application vm's. We are planning to run a\n>> few Postgres synchronous replication nodes.\n>>\n>> Why shouldn't we run Postgres in a VM? What are the downsides? Does \n>> anyone\n>> have any metrics or benchmarks with the latest Postgres?\n>\n> I've also heard people say that they've seen PostgreSQL to perform \n> worse in a VM. In the performance testing that we've done in VMware, \n> though, we haven't seen any big impact. So I guess the answer is that \n> it depends on the specific configuration of CPU, memory, disks and the \n> software. Synchronous replication is likely going to be the biggest \n> bottleneck by far, unless it's mostly read-only. I don't know if \n> virtualization will have a measurable impact on network latency, which \n> is what matters for synchronous replication.\n>\n> So, I'd suggest that you try it yourself, and see how it performs. And \n> please report back to the list, I'd also love to see some numbers!\n>\n>\n\n\nYeah, and there are large numbers of public and/or private cloud-based \nofferings out there (from Amazon RDS, Heroku, EnterpriseDB and VMware \namong others.) Pretty much all of these are VM based, and can be \nsuitable for many workloads.\n\nMaybe the advice is a bit out of date.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Nov 2013 15:28:36 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "Hi!\nWe have virtualized several hundreds of production databases, mostly Oracle and DB2 but a few postgres as well, and we have seen a very positive effect in doing this.\nWe might loose a little bit in virtualization overhead but have gained a lot in flexibility and managebility.\n\nMy tips are to make sure to optimize where you have I/O and don't over provision cpu cores to VM's. Use paravirtualized drivers where you can and use fast storage and network to gain what you loose in virtualization overhead in those areas. \n\nI would also make sure to check that the hypervisor does write to permanent storage before returning to the VM with acknowledgement. \n\nAnd yes, the idea that databases and virtualization does not match, is not a reality to us anymore. It works well for most use cases.\n\n\nBest regards, Martin\n\n> 25 nov 2013 kl. 
21:30 skrev \"Andrew Dunstan\" <[email protected]>:\n> \n> \n>> On 11/25/2013 03:19 PM, Heikki Linnakangas wrote:\n>>> On 25.11.2013 22:01, Lee Nguyen wrote:\n>>> Hi,\n>>> \n>>> Having attended a few PGCons, I've always heard the remark from a few\n>>> presenters and attendees that Postgres shouldn't be run inside a VM. That\n>>> bare metal is the only way to go.\n>>> \n>>> Here at work we were entertaining the idea of running our Postgres database\n>>> on our VM farm alongside our application vm's. We are planning to run a\n>>> few Postgres synchronous replication nodes.\n>>> \n>>> Why shouldn't we run Postgres in a VM? What are the downsides? Does anyone\n>>> have any metrics or benchmarks with the latest Postgres?\n>> \n>> I've also heard people say that they've seen PostgreSQL to perform worse in a VM. In the performance testing that we've done in VMware, though, we haven't seen any big impact. So I guess the answer is that it depends on the specific configuration of CPU, memory, disks and the software. Synchronous replication is likely going to be the biggest bottleneck by far, unless it's mostly read-only. I don't know if virtualization will have a measurable impact on network latency, which is what matters for synchronous replication.\n>> \n>> So, I'd suggest that you try it yourself, and see how it performs. And please report back to the list, I'd also love to see some numbers!\n> \n> \n> Yeah, and there are large numbers of public and/or private cloud-based offerings out there (from Amazon RDS, Heroku, EnterpriseDB and VMware among others.) Pretty much all of these are VM based, and can be suitable for many workloads.\n> \n> Maybe the advice is a bit out of date.\n> \n> cheers\n> \n> andrew\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Nov 2013 21:00:31 +0000", "msg_from": "\"Gudmundsson Martin (mg)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On 26/11/13 09:28, Andrew Dunstan wrote:\n>\n> On 11/25/2013 03:19 PM, Heikki Linnakangas wrote:\n>> On 25.11.2013 22:01, Lee Nguyen wrote:\n>>> Hi,\n>>>\n>>> Having attended a few PGCons, I've always heard the remark from a few\n>>> presenters and attendees that Postgres shouldn't be run inside a VM.\n>>> That\n>>> bare metal is the only way to go.\n>>>\n>>> Here at work we were entertaining the idea of running our Postgres\n>>> database\n>>> on our VM farm alongside our application vm's. We are planning to run a\n>>> few Postgres synchronous replication nodes.\n>>>\n>>> Why shouldn't we run Postgres in a VM? What are the downsides? Does\n>>> anyone\n>>> have any metrics or benchmarks with the latest Postgres?\n>>\n>> I've also heard people say that they've seen PostgreSQL to perform\n>> worse in a VM. In the performance testing that we've done in VMware,\n>> though, we haven't seen any big impact. So I guess the answer is that\n>> it depends on the specific configuration of CPU, memory, disks and the\n>> software. Synchronous replication is likely going to be the biggest\n>> bottleneck by far, unless it's mostly read-only. 
I don't know if\n>> virtualization will have a measurable impact on network latency, which\n>> is what matters for synchronous replication.\n>>\n>> So, I'd suggest that you try it yourself, and see how it performs. And\n>> please report back to the list, I'd also love to see some numbers!\n>>\n>>\n>\n>\n> Yeah, and there are large numbers of public and/or private cloud-based\n> offerings out there (from Amazon RDS, Heroku, EnterpriseDB and VMware\n> among others.) Pretty much all of these are VM based, and can be\n> suitable for many workloads.\n>\n> Maybe the advice is a bit out of date.\n>\n\nAgreed.\n\nPossibly years ago the maturity of various virtualization layers was \nsuch that the advice was sound. But these days it seems that provided \nsome reading is done (so you understand for instance how to make writes \ngo to the hosting hardware), it should be fine.\n\nWe make use of many KVM guest VMs on usually Ubuntu and the IO \nperformance is pretty indistinguishable from bare metal. In some tests \nwe did notice that VMs with >8 cpus tended to stop scaling so we are \nusing more smaller VMs rather than fewer big ones [1].\n\nregards\n\nMark\n\n[1] This was with Pgbench. Note this was over a year ago, so this effect \nmay be not present (different kernels and kvm versions), or the magic \nnumber may be higher than 8 now...\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 10:22:35 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Mon, Nov 25, 2013 at 2:01 PM, Lee Nguyen <[email protected]> wrote:\n> Hi,\n>\n> Having attended a few PGCons, I've always heard the remark from a few\n> presenters and attendees that Postgres shouldn't be run inside a VM. That\n> bare metal is the only way to go.\n>\n> Here at work we were entertaining the idea of running our Postgres database\n> on our VM farm alongside our application vm's. We are planning to run a few\n> Postgres synchronous replication nodes.\n>\n> Why shouldn't we run Postgres in a VM? What are the downsides? Does anyone\n> have any metrics or benchmarks with the latest Postgres?\n\nUnfortunately (and it really pains me to say this) we live in an\nincreasingly virtualized world and we just have to go ahead and deal\nwith it. I work at a mid cap company and we have a zero tolerance\npolicy in terms of applications targeting hardware: in short, you\ncan't. VMs have downsides: you get less performance per buck and have\nanother thing to fail but the administration advantages are compelling\nespecially for large environments. Furthermore, for any size company\nit makes less sense to run your own data center with each passing day;\nthe cloud providers are really bringing up their game. 
This is\neconomic specialization at work.\n\n(but, as always, take regular backups of everything you do that is valuable)\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Nov 2013 16:50:13 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Mon, 25 Nov 2013, Merlin Moncure wrote:\n\n> On Mon, Nov 25, 2013 at 2:01 PM, Lee Nguyen <[email protected]> wrote:\n>> Hi,\n>>\n>> Having attended a few PGCons, I've always heard the remark from a few\n>> presenters and attendees that Postgres shouldn't be run inside a VM. That\n>> bare metal is the only way to go.\n>>\n>> Here at work we were entertaining the idea of running our Postgres database\n>> on our VM farm alongside our application vm's. We are planning to run a few\n>> Postgres synchronous replication nodes.\n>>\n>> Why shouldn't we run Postgres in a VM? What are the downsides? Does anyone\n>> have any metrics or benchmarks with the latest Postgres?\n>\n> Unfortunately (and it really pains me to say this) we live in an\n> increasingly virtualized world and we just have to go ahead and deal\n> with it. I work at a mid cap company and we have a zero tolerance\n> policy in terms of applications targeting hardware: in short, you\n> can't. VMs have downsides: you get less performance per buck and have\n> another thing to fail but the administration advantages are compelling\n> especially for large environments. Furthermore, for any size company\n> it makes less sense to run your own data center with each passing day;\n> the cloud providers are really bringing up their game. This is\n> economic specialization at work.\n\nbeing pedantic, you can get almost all the management benefits on bare metal, \nand you can rent bare metal from hosting providors, cloud VMs are not the only \noption. 'Cloud' makes sense if you have a very predictably spiky load and you \ncan add/remove machines to meet that load, but if you end up needing to have the \nmachines running a significant percentage of the time, dedicated boxes are \ncheaper (as well as faster)\n\nDavid Lang\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Nov 2013 14:57:10 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Mon, Nov 25, 2013 at 4:57 PM, David Lang <[email protected]> wrote:\n> On Mon, 25 Nov 2013, Merlin Moncure wrote:\n>\n>> On Mon, Nov 25, 2013 at 2:01 PM, Lee Nguyen <[email protected]> wrote:\n>>>\n>>> Hi,\n>>>\n>>> Having attended a few PGCons, I've always heard the remark from a few\n>>> presenters and attendees that Postgres shouldn't be run inside a VM. That\n>>> bare metal is the only way to go.\n>>>\n>>> Here at work we were entertaining the idea of running our Postgres\n>>> database\n>>> on our VM farm alongside our application vm's. We are planning to run a\n>>> few\n>>> Postgres synchronous replication nodes.\n>>>\n>>> Why shouldn't we run Postgres in a VM? What are the downsides? 
Does\n>>> anyone\n>>> have any metrics or benchmarks with the latest Postgres?\n>>\n>>\n>> Unfortunately (and it really pains me to say this) we live in an\n>> increasingly virtualized world and we just have to go ahead and deal\n>> with it. I work at a mid cap company and we have a zero tolerance\n>> policy in terms of applications targeting hardware: in short, you\n>> can't. VMs have downsides: you get less performance per buck and have\n>> another thing to fail but the administration advantages are compelling\n>> especially for large environments. Furthermore, for any size company\n>> it makes less sense to run your own data center with each passing day;\n>> the cloud providers are really bringing up their game. This is\n>> economic specialization at work.\n>\n>\n> being pedantic, you can get almost all the management benefits on bare\n> metal, and you can rent bare metal from hosting providors, cloud VMs are not\n> the only option. 'Cloud' makes sense if you have a very predictably spiky\n> load and you can add/remove machines to meet that load, but if you end up\n> needing to have the machines running a significant percentage of the time,\n> dedicated boxes are cheaper (as well as faster)\n\nWell, that depends on how you define 'most'. The thing is for me is\nthat for machines around the office (just like with people) about 10%\nof them do 90% of the work. Being able to slide them around based on\nthat (sometime changing) need is a tremendous time and cost saver.\nFor application and infrastructure development dealing with hardware\nis just a distraction. I'd rather click on some interface and say,\n'this application needs 25k iops guaranteed' and then make a cost\ndriven decision on software optimization. It's hard to let go after\ndecades of hardware innovation (the SSD revolution was the final shoe\nto drop) but for me the time has finally come. As recently as a year\nago I was arguing databases needed to be run against metal.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Nov 2013 17:26:21 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "We have been running several Postgres databases on VMs for the last 9\nmonths. The largest one currently has a few hundreds of millions of rows\n(~1.5T of data, ~100G of frequently queried data ) and performs at ~1000\ntps. Most of our transactions are part of a 2PC, which effectively results\nto high I/O as asynchronous commit is disabled.\n\nMain benefits so far:\n\n- ESXi HA makes high availability completely transparent and reduces the\nnumber of failover servers (we're running N+1 clusters)\n\n- Our projects' load can often miss our expectations, and it changes over\nthe time. Scaling up/down has helped us cope.\n\n- Live relocation of databases helps with hardware upgrades and spreading\nof load.\n\nMain issues:\n\n- We are not overprovisioning at all (using virtualization exclusively for\nthe management benefits), so we don't know its impact to performance.\n\n- I/O has often been a bottleneck. We are not certain whether this is due\nto the impact of virtualization or due to mistakes in our sizing and\n configuration. 
So far we have been coping by spreading the load across\nmore spindles and by increasing the memory.\n\n\n\n\n\nOn Tue, Nov 26, 2013 at 1:26 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Mon, Nov 25, 2013 at 4:57 PM, David Lang <[email protected]> wrote:\n> > On Mon, 25 Nov 2013, Merlin Moncure wrote:\n> >\n> >> On Mon, Nov 25, 2013 at 2:01 PM, Lee Nguyen <[email protected]>\n> wrote:\n> >>>\n> >>> Hi,\n> >>>\n> >>> Having attended a few PGCons, I've always heard the remark from a few\n> >>> presenters and attendees that Postgres shouldn't be run inside a VM.\n> That\n> >>> bare metal is the only way to go.\n> >>>\n> >>> Here at work we were entertaining the idea of running our Postgres\n> >>> database\n> >>> on our VM farm alongside our application vm's. We are planning to run\n> a\n> >>> few\n> >>> Postgres synchronous replication nodes.\n> >>>\n> >>> Why shouldn't we run Postgres in a VM? What are the downsides? Does\n> >>> anyone\n> >>> have any metrics or benchmarks with the latest Postgres?\n> >>\n> >>\n> >> Unfortunately (and it really pains me to say this) we live in an\n> >> increasingly virtualized world and we just have to go ahead and deal\n> >> with it. I work at a mid cap company and we have a zero tolerance\n> >> policy in terms of applications targeting hardware: in short, you\n> >> can't. VMs have downsides: you get less performance per buck and have\n> >> another thing to fail but the administration advantages are compelling\n> >> especially for large environments. Furthermore, for any size company\n> >> it makes less sense to run your own data center with each passing day;\n> >> the cloud providers are really bringing up their game. This is\n> >> economic specialization at work.\n> >\n> >\n> > being pedantic, you can get almost all the management benefits on bare\n> > metal, and you can rent bare metal from hosting providors, cloud VMs are\n> not\n> > the only option. 'Cloud' makes sense if you have a very predictably spiky\n> > load and you can add/remove machines to meet that load, but if you end up\n> > needing to have the machines running a significant percentage of the\n> time,\n> > dedicated boxes are cheaper (as well as faster)\n>\n> Well, that depends on how you define 'most'. The thing is for me is\n> that for machines around the office (just like with people) about 10%\n> of them do 90% of the work. Being able to slide them around based on\n> that (sometime changing) need is a tremendous time and cost saver.\n> For application and infrastructure development dealing with hardware\n> is just a distraction. I'd rather click on some interface and say,\n> 'this application needs 25k iops guaranteed' and then make a cost\n> driven decision on software optimization. It's hard to let go after\n> decades of hardware innovation (the SSD revolution was the final shoe\n> to drop) but for me the time has finally come. As recently as a year\n> ago I was arguing databases needed to be run against metal.\n>\n> merlin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWe have been running several Postgres databases on VMs for the last 9 months. The largest one currently has a few hundreds of millions of rows (~1.5T of data, ~100G of frequently queried data ) and performs at ~1000 tps. Most of our transactions are part of a 2PC, which effectively results to high I/O as asynchronous commit is disabled. 
\nMain benefits so far:- ESXi HA makes high availability completely transparent and reduces the number of failover servers (we're running N+1 clusters)\n- Our projects' load can often miss our expectations, and it changes over the time. Scaling up/down has helped us cope.- Live relocation of databases helps with hardware upgrades and spreading of load.\nMain issues:- We are not overprovisioning at all (using virtualization exclusively for the management benefits), so we don't know its impact to performance.\n- I/O has often been a bottleneck. We are not certain whether this is due to the impact of virtualization or due to mistakes in our sizing and  configuration. So far we have been coping by spreading the load across more spindles and by increasing the memory.\nOn Tue, Nov 26, 2013 at 1:26 AM, Merlin Moncure <[email protected]> wrote:\nOn Mon, Nov 25, 2013 at 4:57 PM, David Lang <[email protected]> wrote:\n\n> On Mon, 25 Nov 2013, Merlin Moncure wrote:\n>\n>> On Mon, Nov 25, 2013 at 2:01 PM, Lee Nguyen <[email protected]> wrote:\n>>>\n>>> Hi,\n>>>\n>>> Having attended a few PGCons, I've always heard the remark from a few\n>>> presenters and attendees that Postgres shouldn't be run inside a VM. That\n>>> bare metal is the only way to go.\n>>>\n>>> Here at work we were entertaining the idea of running our Postgres\n>>> database\n>>> on our VM farm alongside our application vm's.  We are planning to run a\n>>> few\n>>> Postgres synchronous replication nodes.\n>>>\n>>> Why shouldn't we run Postgres in a VM?  What are the downsides? Does\n>>> anyone\n>>> have any metrics or benchmarks with the latest Postgres?\n>>\n>>\n>> Unfortunately (and it really pains me to say this) we live in an\n>> increasingly virtualized world and we just have to go ahead and deal\n>> with it.  I work at a mid cap company and we have a zero tolerance\n>> policy in terms of applications targeting hardware: in short, you\n>> can't.  VMs have downsides: you get less performance per buck and have\n>> another thing to fail but the administration advantages are compelling\n>> especially for large environments.  Furthermore, for any size company\n>> it makes less sense to run your own data center with each passing day;\n>> the cloud providers are really bringing up their game. This is\n>> economic specialization at work.\n>\n>\n> being pedantic, you can get almost all the management benefits on bare\n> metal, and you can rent bare metal from hosting providors, cloud VMs are not\n> the only option. 'Cloud' makes sense if you have a very predictably spiky\n> load and you can add/remove machines to meet that load, but if you end up\n> needing to have the machines running a significant percentage of the time,\n> dedicated boxes are cheaper (as well as faster)\n\nWell, that depends on how you define 'most'.  The thing is for me is\nthat for machines around the office (just like with people) about 10%\nof them do 90% of the work.  Being able to slide them around based on\nthat (sometime changing) need is a tremendous time and cost saver.\nFor application and infrastructure development dealing with hardware\nis just a distraction.   I'd rather click on some interface and say,\n'this application needs 25k iops guaranteed' and then make a cost\ndriven decision on software optimization.  It's hard to let go after\ndecades of hardware innovation (the SSD revolution was the final shoe\nto drop) but for me the time has finally come.  
As recently as a year\nago I was arguing databases needed to be run against metal.\n\nmerlin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 26 Nov 2013 08:12:50 +0200", "msg_from": "Xenofon Papadopoulos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Tue, 26 Nov 2013, Xenofon Papadopoulos wrote:\n\n> We have been running several Postgres databases on VMs for the last 9\n> months. The largest one currently has a few hundreds of millions of rows\n> (~1.5T of data, ~100G of frequently queried data ) and performs at ~1000\n> tps. Most of our transactions are part of a 2PC, which effectively results\n> to high I/O as asynchronous commit is disabled.\n>\n> Main benefits so far:\n>\n> - ESXi HA makes high availability completely transparent and reduces the\n> number of failover servers (we're running N+1 clusters)\n>\n> - Our projects' load can often miss our expectations, and it changes over\n> the time. Scaling up/down has helped us cope.\n\nhow do you add another server without having to do a massive data copy in the \nprocess?\n\nDavid Lang\n\n> - Live relocation of databases helps with hardware upgrades and spreading\n> of load.\n>\n> Main issues:\n>\n> - We are not overprovisioning at all (using virtualization exclusively for\n> the management benefits), so we don't know its impact to performance.\n>\n> - I/O has often been a bottleneck. We are not certain whether this is due\n> to the impact of virtualization or due to mistakes in our sizing and\n> configuration. So far we have been coping by spreading the load across\n> more spindles and by increasing the memory.\n>\n>\n>\n>\n>\n> On Tue, Nov 26, 2013 at 1:26 AM, Merlin Moncure <[email protected]> wrote:\n>\n>> On Mon, Nov 25, 2013 at 4:57 PM, David Lang <[email protected]> wrote:\n>>> On Mon, 25 Nov 2013, Merlin Moncure wrote:\n>>>\n>>>> On Mon, Nov 25, 2013 at 2:01 PM, Lee Nguyen <[email protected]>\n>> wrote:\n>>>>>\n>>>>> Hi,\n>>>>>\n>>>>> Having attended a few PGCons, I've always heard the remark from a few\n>>>>> presenters and attendees that Postgres shouldn't be run inside a VM.\n>> That\n>>>>> bare metal is the only way to go.\n>>>>>\n>>>>> Here at work we were entertaining the idea of running our Postgres\n>>>>> database\n>>>>> on our VM farm alongside our application vm's. We are planning to run\n>> a\n>>>>> few\n>>>>> Postgres synchronous replication nodes.\n>>>>>\n>>>>> Why shouldn't we run Postgres in a VM? What are the downsides? Does\n>>>>> anyone\n>>>>> have any metrics or benchmarks with the latest Postgres?\n>>>>\n>>>>\n>>>> Unfortunately (and it really pains me to say this) we live in an\n>>>> increasingly virtualized world and we just have to go ahead and deal\n>>>> with it. I work at a mid cap company and we have a zero tolerance\n>>>> policy in terms of applications targeting hardware: in short, you\n>>>> can't. VMs have downsides: you get less performance per buck and have\n>>>> another thing to fail but the administration advantages are compelling\n>>>> especially for large environments. Furthermore, for any size company\n>>>> it makes less sense to run your own data center with each passing day;\n>>>> the cloud providers are really bringing up their game. 
This is\n>>>> economic specialization at work.\n>>>\n>>>\n>>> being pedantic, you can get almost all the management benefits on bare\n>>> metal, and you can rent bare metal from hosting providors, cloud VMs are\n>> not\n>>> the only option. 'Cloud' makes sense if you have a very predictably spiky\n>>> load and you can add/remove machines to meet that load, but if you end up\n>>> needing to have the machines running a significant percentage of the\n>> time,\n>>> dedicated boxes are cheaper (as well as faster)\n>>\n>> Well, that depends on how you define 'most'. The thing is for me is\n>> that for machines around the office (just like with people) about 10%\n>> of them do 90% of the work. Being able to slide them around based on\n>> that (sometime changing) need is a tremendous time and cost saver.\n>> For application and infrastructure development dealing with hardware\n>> is just a distraction. I'd rather click on some interface and say,\n>> 'this application needs 25k iops guaranteed' and then make a cost\n>> driven decision on software optimization. It's hard to let go after\n>> decades of hardware innovation (the SSD revolution was the final shoe\n>> to drop) but for me the time has finally come. As recently as a year\n>> ago I was arguing databases needed to be run against metal.\n>>\n>> merlin\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Nov 2013 22:16:02 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "Which scenario do you have in mind? We don't add servers to scale out, we\nonly scale up in each single project. We use SAN for storage, so if we need\nto increase disk space we provide more through LVM.\n\nThere is one case we need to move data around, when we relocate projects\nover to new storage (eg to reduce the load on the SAN). There is\nsignificant data copy involved, but as it's done in parallel with our live\noperations and doesn't cause noticeable performance drop it hasn't been an\nissue so far.\n\n\nOn Tue, Nov 26, 2013 at 8:16 AM, David Lang <[email protected]> wrote:\n\n> On Tue, 26 Nov 2013, Xenofon Papadopoulos wrote:\n>\n> We have been running several Postgres databases on VMs for the last 9\n>> months. The largest one currently has a few hundreds of millions of rows\n>> (~1.5T of data, ~100G of frequently queried data ) and performs at ~1000\n>> tps. Most of our transactions are part of a 2PC, which effectively results\n>> to high I/O as asynchronous commit is disabled.\n>>\n>> Main benefits so far:\n>>\n>> - ESXi HA makes high availability completely transparent and reduces the\n>> number of failover servers (we're running N+1 clusters)\n>>\n>> - Our projects' load can often miss our expectations, and it changes over\n>> the time. 
Scaling up/down has helped us cope.\n>>\n>\n> how do you add another server without having to do a massive data copy in\n> the process?\n>\n> David Lang\n>\n>\n> - Live relocation of databases helps with hardware upgrades and spreading\n>> of load.\n>>\n>> Main issues:\n>>\n>> - We are not overprovisioning at all (using virtualization exclusively for\n>> the management benefits), so we don't know its impact to performance.\n>>\n>> - I/O has often been a bottleneck. We are not certain whether this is due\n>> to the impact of virtualization or due to mistakes in our sizing and\n>> configuration. So far we have been coping by spreading the load across\n>> more spindles and by increasing the memory.\n>>\n>>\n>>\n>>\n>>\n>> On Tue, Nov 26, 2013 at 1:26 AM, Merlin Moncure <[email protected]>\n>> wrote:\n>>\n>> On Mon, Nov 25, 2013 at 4:57 PM, David Lang <[email protected]> wrote:\n>>>\n>>>> On Mon, 25 Nov 2013, Merlin Moncure wrote:\n>>>>\n>>>> On Mon, Nov 25, 2013 at 2:01 PM, Lee Nguyen <[email protected]>\n>>>>>\n>>>> wrote:\n>>>\n>>>>\n>>>>>> Hi,\n>>>>>>\n>>>>>> Having attended a few PGCons, I've always heard the remark from a few\n>>>>>> presenters and attendees that Postgres shouldn't be run inside a VM.\n>>>>>>\n>>>>> That\n>>>\n>>>> bare metal is the only way to go.\n>>>>>>\n>>>>>> Here at work we were entertaining the idea of running our Postgres\n>>>>>> database\n>>>>>> on our VM farm alongside our application vm's. We are planning to run\n>>>>>>\n>>>>> a\n>>>\n>>>> few\n>>>>>> Postgres synchronous replication nodes.\n>>>>>>\n>>>>>> Why shouldn't we run Postgres in a VM? What are the downsides? Does\n>>>>>> anyone\n>>>>>> have any metrics or benchmarks with the latest Postgres?\n>>>>>>\n>>>>>\n>>>>>\n>>>>> Unfortunately (and it really pains me to say this) we live in an\n>>>>> increasingly virtualized world and we just have to go ahead and deal\n>>>>> with it. I work at a mid cap company and we have a zero tolerance\n>>>>> policy in terms of applications targeting hardware: in short, you\n>>>>> can't. VMs have downsides: you get less performance per buck and have\n>>>>> another thing to fail but the administration advantages are compelling\n>>>>> especially for large environments. Furthermore, for any size company\n>>>>> it makes less sense to run your own data center with each passing day;\n>>>>> the cloud providers are really bringing up their game. This is\n>>>>> economic specialization at work.\n>>>>>\n>>>>\n>>>>\n>>>> being pedantic, you can get almost all the management benefits on bare\n>>>> metal, and you can rent bare metal from hosting providors, cloud VMs are\n>>>>\n>>> not\n>>>\n>>>> the only option. 'Cloud' makes sense if you have a very predictably\n>>>> spiky\n>>>> load and you can add/remove machines to meet that load, but if you end\n>>>> up\n>>>> needing to have the machines running a significant percentage of the\n>>>>\n>>> time,\n>>>\n>>>> dedicated boxes are cheaper (as well as faster)\n>>>>\n>>>\n>>> Well, that depends on how you define 'most'. The thing is for me is\n>>> that for machines around the office (just like with people) about 10%\n>>> of them do 90% of the work. Being able to slide them around based on\n>>> that (sometime changing) need is a tremendous time and cost saver.\n>>> For application and infrastructure development dealing with hardware\n>>> is just a distraction. I'd rather click on some interface and say,\n>>> 'this application needs 25k iops guaranteed' and then make a cost\n>>> driven decision on software optimization. 
It's hard to let go after\n>>> decades of hardware innovation (the SSD revolution was the final shoe\n>>> to drop) but for me the time has finally come. As recently as a year\n>>> ago I was arguing databases needed to be run against metal.\n>>>\n>>> merlin\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.\n>>> org)\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>>\n>>\n\nWhich scenario do you have in mind? We don't add servers to scale out, we only scale up in each single project. We use SAN for storage, so if we need to increase disk space we provide more through LVM.\nThere is one case we need to move data around, when we relocate projects over to new storage (eg to reduce the load on the SAN). There is significant data copy involved, but as it's done in parallel with our live operations and doesn't cause noticeable performance drop it hasn't been an issue so far.\nOn Tue, Nov 26, 2013 at 8:16 AM, David Lang <[email protected]> wrote:\nOn Tue, 26 Nov 2013, Xenofon Papadopoulos wrote:\n\n\nWe have been running several Postgres databases on VMs for the last 9\nmonths. The largest one currently has a few hundreds of millions of rows\n(~1.5T of data, ~100G of frequently queried data ) and performs at ~1000\ntps. Most of our transactions are part of a 2PC, which effectively results\nto high I/O as asynchronous commit is disabled.\n\nMain benefits so far:\n\n- ESXi HA makes high availability completely transparent and reduces the\nnumber of failover servers (we're running N+1 clusters)\n\n- Our projects' load can often miss our expectations, and it changes over\nthe time. Scaling up/down has helped us cope.\n\n\nhow do you add another server without having to do a massive data copy in the process?\n\nDavid Lang\n\n\n- Live relocation of databases helps with hardware upgrades and spreading\nof load.\n\nMain issues:\n\n- We are not overprovisioning at all (using virtualization exclusively for\nthe management benefits), so we don't know its impact to performance.\n\n- I/O has often been a bottleneck. We are not certain whether this is due\nto the impact of virtualization or due to mistakes in our sizing and\nconfiguration. So far we have been coping by spreading the load across\nmore spindles and by increasing the memory.\n\n\n\n\n\nOn Tue, Nov 26, 2013 at 1:26 AM, Merlin Moncure <[email protected]> wrote:\n\n\nOn Mon, Nov 25, 2013 at 4:57 PM, David Lang <[email protected]> wrote:\n\nOn Mon, 25 Nov 2013, Merlin Moncure wrote:\n\n\nOn Mon, Nov 25, 2013 at 2:01 PM, Lee Nguyen <[email protected]>\n\nwrote:\n\n\nHi,\n\nHaving attended a few PGCons, I've always heard the remark from a few\npresenters and attendees that Postgres shouldn't be run inside a VM.\n\nThat\n\n\nbare metal is the only way to go.\n\nHere at work we were entertaining the idea of running our Postgres\ndatabase\non our VM farm alongside our application vm's.  We are planning to run\n\na\n\n\nfew\nPostgres synchronous replication nodes.\n\nWhy shouldn't we run Postgres in a VM?  What are the downsides? Does\nanyone\nhave any metrics or benchmarks with the latest Postgres?\n\n\n\nUnfortunately (and it really pains me to say this) we live in an\nincreasingly virtualized world and we just have to go ahead and deal\nwith it.  I work at a mid cap company and we have a zero tolerance\npolicy in terms of applications targeting hardware: in short, you\ncan't.  
VMs have downsides: you get less performance per buck and have\nanother thing to fail but the administration advantages are compelling\nespecially for large environments.  Furthermore, for any size company\nit makes less sense to run your own data center with each passing day;\nthe cloud providers are really bringing up their game. This is\neconomic specialization at work.\n\n\n\nbeing pedantic, you can get almost all the management benefits on bare\nmetal, and you can rent bare metal from hosting providors, cloud VMs are\n\nnot\n\nthe only option. 'Cloud' makes sense if you have a very predictably spiky\nload and you can add/remove machines to meet that load, but if you end up\nneeding to have the machines running a significant percentage of the\n\ntime,\n\ndedicated boxes are cheaper (as well as faster)\n\n\nWell, that depends on how you define 'most'.  The thing is for me is\nthat for machines around the office (just like with people) about 10%\nof them do 90% of the work.  Being able to slide them around based on\nthat (sometime changing) need is a tremendous time and cost saver.\nFor application and infrastructure development dealing with hardware\nis just a distraction.   I'd rather click on some interface and say,\n'this application needs 25k iops guaranteed' and then make a cost\ndriven decision on software optimization.  It's hard to let go after\ndecades of hardware innovation (the SSD revolution was the final shoe\nto drop) but for me the time has finally come.  As recently as a year\nago I was arguing databases needed to be run against metal.\n\nmerlin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 26 Nov 2013 08:30:31 +0200", "msg_from": "Xenofon Papadopoulos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 11/25/2013 09:01 PM, Lee Nguyen wrote:\n> Hi,\n> \n> Having attended a few PGCons, I've always heard the remark from a \n> few presenters and attendees that Postgres shouldn't be run inside \n> a VM. That bare metal is the only way to go.\n> \n[........]\n\nHello\n\nThis was true some years ago. In our experience, this is not true\nanymore if you are not running a very demanding system that will be a\nchallenge even running on metal. It should work well for most use\ncases if your infrastructure is configured correctly.\n\nThis year we have moved all our postgreSQL servers (45+) to a VMware\ncluster running vSphere 5.1. We are also almost finished moving all\nour oracle databases to this cluster too. More than 100 virtual\nservers and some thousands databases are running without problems in\nour VM environment.\n\nIn our experience, VMware vSphere 5.1 makes a huge different in IO\nperformance compared to older versions. Our tests against a storage\nsolution connected to vm servers and metal servers last year, did not\nshow any particular difference in performance between them. Some tips:\n\n* We use a SAN via Fibre Channel to storage our data. Be sure to have\nenough active FC channels for your load. Do not even think to use NFS\nto connect your physical nodes to your SAN.\n\n* We are using 10GigE to interconnect the physical nodes in our\ncluster. 
This helps a lot when moving VM servers between nodes.\n\n* Don't use in production the snapshot functionality in VM clusters.\n\n* Don't over provision resources, specially memory.\n\n* Use paravirtualized drivers.\n\n* As usual, your storage solution will define the limits in\nperformance of your VM cluster.\n\nWe have gained a lot in flexibility and manageability without losing\nperformance, the benefits in these areas are many when you\nadministrate many servers/databases.\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAlKUbjcACgkQBhuKQurGihTpHQCeIDkjR/BFM61V2ft72BYd2SBr\nsowAnRrscNmByay3KL9iicpGUYcb2hv6\n=Qvey\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 10:47:35 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "2013-11-25 21:19 keltezéssel, Heikki Linnakangas írta:\n> On 25.11.2013 22:01, Lee Nguyen wrote:\n>> Hi,\n>>\n>> Having attended a few PGCons, I've always heard the remark from a few\n>> presenters and attendees that Postgres shouldn't be run inside a VM. That\n>> bare metal is the only way to go.\n>>\n>> Here at work we were entertaining the idea of running our Postgres database\n>> on our VM farm alongside our application vm's. We are planning to run a\n>> few Postgres synchronous replication nodes.\n>>\n>> Why shouldn't we run Postgres in a VM? What are the downsides? Does anyone\n>> have any metrics or benchmarks with the latest Postgres?\n>\n> I've also heard people say that they've seen PostgreSQL to perform worse in a VM. In the \n> performance testing that we've done in VMware, though, we haven't seen any big impact. \n> So I guess the answer is that it depends on the specific configuration of CPU, memory, \n> disks and the software.\n\nWe at Cybertec tested some configurations about 2 months ago.\nThe performance drop is coming from the disk given to the VM guest.\n\nWhen there is a dedicated disk (pass through) given to the VM guest,\nPostgreSQL runs at a speed of around 98% of the bare metal.\n\nWhen the virtual disk is a disk file on the host machine, we've measured\n20% or lower. The host used Fedora 19/x86_64 with IIRC a 3.10.x Linux kernel\nwith EXT4 filesystem (this latter is sure, not IIRC). The effect was observed\nboth under Qemu/KVM and Xen.\n\nThe virtual disk was not pre-allocated, since it was the default setting,\ni.e. space savings preferred over speed. The figure might be better with\na pre-allocated disk but the filesystem journalling done twice (both in the\nhost and the guest) will have an effect.\n\nThe PostgreSQL server versions 9.2.x, 9.3beta were tested with pgbench,\nstandalone, without replication.\n\nBest regards,\nZoltán Böszörményi\n\n> Synchronous replication is likely going to be the biggest bottleneck by far, unless it's \n> mostly read-only. I don't know if virtualization will have a measurable impact on \n> network latency, which is what matters for synchronous replication.\n>\n> So, I'd suggest that you try it yourself, and see how it performs.
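A minimal pgbench comparison along these lines is usually enough to reproduce the kind of gap described in the Cybertec test above; the database name, scale factor, client counts and duration here are illustrative and are not the settings used in that test:

    # initialize a throwaway test database (scale 100 is roughly 1.5 GB)
    createdb pgbench
    pgbench -i -s 100 pgbench

    # write-heavy TPC-B-style run: stresses WAL flushes, shows the disk-file penalty
    pgbench -c 8 -j 8 -T 300 pgbench

    # read-only run: mostly CPU and memory, usually much closer to bare metal
    pgbench -S -c 8 -j 8 -T 300 pgbench

Running the same three commands on the host and inside the guest, against the same class of storage, gives directly comparable tps figures.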
And please report \n> back to the list, I'd also love to see some numbers!\n>\n> - Heikki\n>\n>\n\n\n-- \n----------------------------------\nZoltán Böszörményi\nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt, Austria\nWeb: http://www.postgresql-support.de\n http://www.postgresql.at/\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 14:51:59 +0100", "msg_from": "Boszormenyi Zoltan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "Zoltan,\n\n* Boszormenyi Zoltan ([email protected]) wrote:\n> When the virtual disk is a disk file on the host machine, we've measured\n> 20% or lower. The host used Fedora 19/x86_64 with IIRC a 3.10.x Linux kernel\n> with EXT4 filesystem (this latter is sure, not IIRC). The effect was observed\n> both under Qemu/KVM and Xen.\n\nInteresting- that's far worse than I would have expected. Was this test\ndone with paravirtualized drivers? If not, I can certainly understand\nthe terrible performance.\n\nIndependently of that, I'll add my own 2c that DB people tend to be\npretty paranoid and the current round of VM technologies out there have\ncaused more than one person to lose data because fsync wasn't honored\nall the way down to the disk. This is especially true of 'home-grown'\nsetups, imv, but I'm sure you could configure the commercial offerings\nto lie to the guest OS too. Of course, there are similar concerns about\na SAN or even local RAID cards, but there's a lot more general\nfamiliarity and history around those which reduces the risk there (or at\nleast, that's the thought).\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 26 Nov 2013 09:19:44 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": ">\n> On 25.11.2013 22:01, Lee Nguyen wrote:\n>>\n>>>\n>>> Why shouldn't we run Postgres in a VM? What are the downsides? Does\n>>> anyone\n>>> have any metrics or benchmarks with the latest Postgres?\n>>>\n>>\nFor those of us with small (a few to a dozen servers), we'd like to get out\nof server maintenance completely. Can anyone with experience on a cloud VM\nsolution comment? Do the VM solutions provided by the major hosting\ncompanies have the same good performance as the VM's that several have\ndescribed here?\n\nObviously there's Amazon's new Postgres solution available. What else is\nout there in the way of \"instant on\" solutions with Linux/Postgres/Apache\npreconfigured systems? Has anyone used them in production?\n\nThanks,\nCraig\n\n\n\nOn 25.11.2013 22:01, Lee Nguyen wrote:\n\n\nWhy shouldn't we run Postgres in a VM?  What are the downsides? Does anyone\nhave any metrics or benchmarks with the latest Postgres?For those of us with small (a few to a dozen servers), we'd like to get out of server maintenance completely.  Can anyone with experience on a cloud VM solution comment?  Do the VM solutions provided by the major hosting companies have the same good performance as the VM's that several have described here?\nObviously there's Amazon's new Postgres solution available.  What else is out there in the way of \"instant on\" solutions with Linux/Postgres/Apache preconfigured systems?
Has anyone used them in production?\nThanks,Craig", "msg_date": "Tue, 26 Nov 2013 06:26:10 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "\nOn 11/26/2013 09:26 AM, Craig James wrote:\n>\n> On 25.11.2013 22:01, Lee Nguyen wrote:\n>\n>\n> Why shouldn't we run Postgres in a VM? What are the\n> downsides? Does anyone\n> have any metrics or benchmarks with the latest Postgres?\n>\n>\n> For those of us with small (a few to a dozen servers), we'd like to \n> get out of server maintenance completely. Can anyone with experience \n> on a cloud VM solution comment? Do the VM solutions provided by the \n> major hosting companies have the same good performance as the VM's \n> that that several have described here?\n>\n> Obviously there's Amazon's new Postgres solution available. What else \n> is out there in the way of \"instant on\" solutions with \n> Linux/Postgres/Apache preconfigured systems? Has anyone used them in \n> production?\n>\n>\n\nIf you want a full stack including Postgres, Heroku might be your best \nbet. Depends a bit on your application and your workload. And yes, I've \nused it. Full disclosure: I have done work paid for by Heroku.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 10:31:32 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "\nOn 11/26/2013 08:51 AM, Boszormenyi Zoltan wrote:\n> 2013-11-25 21:19 keltezéssel, Heikki Linnakangas írta:\n>> On 25.11.2013 22:01, Lee Nguyen wrote:\n>>> Hi,\n>>>\n>>> Having attended a few PGCons, I've always heard the remark from a few\n>>> presenters and attendees that Postgres shouldn't be run inside a VM. \n>>> That\n>>> bare metal is the only way to go.\n>>>\n>>> Here at work we were entertaining the idea of running our Postgres \n>>> database\n>>> on our VM farm alongside our application vm's. We are planning to \n>>> run a\n>>> few Postgres synchronous replication nodes.\n>>>\n>>> Why shouldn't we run Postgres in a VM? What are the downsides? Does \n>>> anyone\n>>> have any metrics or benchmarks with the latest Postgres?\n>>\n>> I've also heard people say that they've seen PostgreSQL to perform \n>> worse in a VM. In the performance testing that we've done in VMware, \n>> though, we haven't seen any big impact. So I guess the answer is that \n>> it depends on the specific configuration of CPU, memory, disks and \n>> the software.\n>\n> We at Cybertec tested some configurations about 2 months ago.\n> The performance drop is coming from the disk given to the VM guest.\n>\n> When there is a dedicated disk (pass through) given to the VM guest,\n> PostgreSQL runs at a speed of around 98% of the bare metal.\n>\n> When the virtual disk is a disk file on the host machine, we've measured\n> 20% or lower. The host used Fedora 19/x86_64 with IIRC a 3.10.x Linux \n> kernel\n> with EXT4 filesystem (this latter is sure, not IIRC). The effect was \n> observed\n> both under Qemu/KVM and Xen.\n>\n> The virtual disk was not pre-allocated, since it was the default setting,\n> i.e. space savings preferred over speed. 
The figure might be better with\n> a pre-allocated disk but the filesystem journalling done twice (both \n> in the\n> host and the guest) will have an effect.\n\n\nNot-pre-allocated disk-file backed is just about the worst case in my \nexperience.\n\nTry pre-allocated VirtIO disks on an LVM volume group - you should get \nmuch better performance.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 10:39:48 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On 11/26/2013 7:26 AM, Craig James wrote:\n>\n> For those of us with small (a few to a dozen servers), we'd like to \n> get out of server maintenance completely. Can anyone with experience \n> on a cloud VM solution comment? Do the VM solutions provided by the \n> major hosting companies have the same good performance as the VM's \n> that that several have described here?\n>\n> Obviously there's Amazon's new Postgres solution available. What else \n> is out there in the way of \"instant on\" solutions with \n> Linux/Postgres/Apache preconfigured systems? Has anyone used them in \n> production?\n\nI've done some work with Heroku and the MySQL flavor of AWS service.\nThey work, and are convenient, but there are a couple of issues :\n\n1. Random odd (and bad) things can happen from a performance perspective \nthat you just need to cope with. e.g. I/O will become vastly slower for \nperiods of 10s of seconds, once or twice a day. If you don't like the \nidea of phenomena like this in your system, beware.\n\n2. Your inability to connect with the bare metal may turn out to be a \nsignificant hassle when trying to understand some performance issue in \nthe future. Tricks that we're used to using such as looking at \"iostat\" \n(or even \"top\") output are no longer usable because the hosting company \nwill not give you a login on the host VM. This limitation extends to \nmany many techniques that have been commonly used in the past and can \nbecome a major headache to the point where you need to reproduce the \nsystem on physical hardware just to understand what's going on with it \n(been there, done that...)\n\nFor the reasons above I would caution deploying a production service \n(today) on a \"SaaS\" database service like Heroku or Amazon RDS.\nRunning your own database inside a stock VM might be better, but it can \nbe hard to get the right kind of I/O for that deployment scenario.\nIn the case of self-hosted VMWare or KVM obviously you have much more \ncontrol and observability.\n\nHeroku had (at least when I last used it, a year ago or so) an \nadditional issue in that they host on AWS VMs so if something goes wrong \nyou are talking to one company that is using another company's virtual \nmachine service. 
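To make Andrew's VirtIO-on-LVM suggestion above concrete, on a KVM host this amounts to giving the guest a pre-allocated raw block device (a logical volume in this sketch) over virtio with host caching disabled, so that a flush issued in the guest reaches the hardware. The volume group, volume name, size and device path below are placeholders, not values taken from the thread:

    # on the host: carve out a pre-allocated logical volume for the guest's data disk
    lvcreate -L 200G -n pg_guest_data vg0

    # attach it to the guest as a raw virtio disk, bypassing the host page cache
    qemu-kvm ... \
        -drive file=/dev/vg0/pg_guest_data,if=virtio,format=raw,cache=none

    # with libvirt, the equivalent is a <disk type='block'> entry in the guest XML
    # using the virtio bus and driver cache='none'

cache=none also keeps the guest's fsyncs from being absorbed by the host page cache, which is relevant to the data-safety concern Stephen raised earlier in the thread.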
Not a recipe for clarity, good service and hair \nretention...\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 09:29:05 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Tue, Nov 26, 2013 at 8:29 AM, David Boreham <[email protected]>wrote:\n\n> On 11/26/2013 7:26 AM, Craig James wrote:\n>\n>>\n>> For those of us with small (a few to a dozen servers), we'd like to get\n>> out of server maintenance completely. Can anyone with experience on a cloud\n>> VM solution comment? ...\n>>\n>\n> I've done some work with Heroku and the MySQL flavor of AWS service.\n>\n\nThanks, I'll check Heroku out.\n\n\n> For the reasons above I would caution deploying a production service\n> (today) on a \"SaaS\" database service like Heroku or Amazon RDS.\n> Running your own database inside a stock VM might be better, but it can be\n> hard to get the right kind of I/O for that deployment scenario.\n> In the case of self-hosted VMWare or KVM obviously you have much more\n> control and observability.\n>\n\nWell, the whole point of switching to a cloud provider is to get out of the\nbusiness of buying hardware and hauling it down to the co-lo facility.\nAdding VMWare or KVM is just one more thing we'd have to add to our\nsysadmin skills. We'd rather focus on our core technology, the stuff we're\nbetter at than anyone else.\n\nSo far I'm impressed by what I've read about Amazon's Postgres instances.\nMaybe the reality will be disappointing, but (for example) the idea of\nsetting up streaming replication with one click is pretty appealing.\n\nCraig\n\nOn Tue, Nov 26, 2013 at 8:29 AM, David Boreham <[email protected]> wrote:\nOn 11/26/2013 7:26 AM, Craig James wrote:\n\n\nFor those of us with small (a few to a dozen servers), we'd like to get out of server maintenance completely. Can anyone with experience on a cloud VM solution comment?  ...\n\n\nI've done some work with Heroku and the MySQL flavor of AWS service.Thanks, I'll check Heroku out. \n\nFor the reasons above I would caution deploying a production service (today) on a \"SaaS\" database service like Heroku or Amazon RDS.\nRunning your own database inside a stock VM might be better, but it can be hard to get the right kind of I/O for that deployment scenario.\nIn the case of self-hosted VMWare or KVM obviously you have much more control and observability.Well, the whole point of switching to a cloud provider is to get out of the business of buying hardware and hauling it down to the co-lo facility.  Adding VMWare or KVM is just one more thing we'd have to add to our sysadmin skills.  We'd rather focus on our core technology, the stuff we're better at than anyone else.\nSo far I'm impressed by what I've read about Amazon's Postgres instances. Maybe the reality will be disappointing, but (for example) the idea of setting up streaming replication with one click is pretty appealing.\nCraig", "msg_date": "Tue, 26 Nov 2013 09:24:01 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On 11/25/2013 12:01 PM, Lee Nguyen wrote:\n> Hi,\n> \n> Having attended a few PGCons, I've always heard the remark from a few\n> presenters and attendees that Postgres shouldn't be run inside a VM. 
That\n> bare metal is the only way to go.\n\nThis is pretty dated advice. Early VMs had horrible performance under\nload, which is mostly where this thinking comes from. It's not true\nanymore.\n\nIt *is* true that getting good performance in a virtualized environment\nrequires more tuning than bare metal, because you have to tune the VM\nsystem as well.\n\n> Here at work we were entertaining the idea of running our Postgres database\n> on our VM farm alongside our application vm's. We are planning to run a\n> few Postgres synchronous replication nodes.\n\nBiggest pitfall here is IO performance configuration. I can't give you\nspecific advice without knowing the platform and the desired workload.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 09:31:23 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Tue, Nov 26, 2013 at 11:31 AM, Josh Berkus <[email protected]> wrote:\n> On 11/25/2013 12:01 PM, Lee Nguyen wrote:\n>> Hi,\n>>\n>> Having attended a few PGCons, I've always heard the remark from a few\n>> presenters and attendees that Postgres shouldn't be run inside a VM. That\n>> bare metal is the only way to go.\n>\n> This is pretty dated advice. Early VMs had horrible performance under\n> load, which is mostly where this thinking comes from. It's not true\n> anymore.\n>\n> It *is* true that getting good performance in a virtualized environment\n> requires more tuning than bare metal, because you have to tune the VM\n> system as well.\n>\n>> Here at work we were entertaining the idea of running our Postgres database\n>> on our VM farm alongside our application vm's. We are planning to run a\n>> few Postgres synchronous replication nodes.\n>\n> Biggest pitfall here is IO performance configuration. I can't give you\n> specific advice without knowing the platform and the desired workload.\n\nYeah. Seeing things like provisioned iops in the cloud services is a\npretty big deal. I do think it's still fairly expensive for what you\nget but SSDs and competition is going to force prices down quickly\nover time. For \"in house\" virtualized setups, you can get pretty far\nwith SSDs using any number of options (direct attached to the host,\niscsi etc, SAN etc).\n\nFor I/O constrained systems, I don't consider any spindle based\nsystems, in particular SANs, to be a good investment. Curious: I\njust read your article on iscsi\n(http://it.toolbox.com/blogs/database-soup/the-problem-with-iscsi-30602).\n Do you still consider iscsi to be imperformant?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 11:53:18 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Nov 26, 2013, at 9:24 AM, Craig James wrote:\n\n> So far I'm impressed by what I've read about Amazon's Postgres instances. Maybe the reality will be disappointing, but (for example) the idea of setting up streaming replication with one click is pretty appealing.\n\nWhere did you hear this was an option? 
When we talked to AWS about their Postgres RDS offering, they were pretty clear that (currently) replication is hardware-based, the slave is not live, and you don't get access to the WALs that they use internally for PITR. Changing that is something they want to address, but isn't there today.\n\nThat said, we use AWS instances to run Postgres, and so long as you use their Provisioned IOPS service for i/o and size your instances appropriately, it's been pretty good. Maybe not the most cost-effective option, but you're paying for the service to not have to worry about stocking spare parts or making sure your hardware is burned in before use. And AWS makes it easy to add regional or even global redundancy, if that's what you want. (Of course that costs even more money, but if you need it, using AWS is a lot easier than finding colos around the world yourself.)\n\nLike many have said, the problem of using VMs for databases is that a lot of VM systems try to over-subscribe the hardware for more savings. That works for a lot of loads but not a busy database. So just make sure your VM isn't doing that to you, and most of the performance argument for avoiding VMs goes away.\nOn Nov 26, 2013, at 9:24 AM, Craig James wrote:So far I'm impressed by what I've read about Amazon's Postgres instances. Maybe the reality will be disappointing, but (for example) the idea of setting up streaming replication with one click is pretty appealing.Where did you hear this was an option? When we talked to AWS about their Postgres RDS offering, they were pretty clear that (currently) replication is hardware-based, the slave is not live, and you don't get access to the WALs that they use internally for PITR. Changing that is something they want to address, but isn't there today.That said, we use AWS instances to run Postgres, and so long as you use their Provisioned IOPS service for i/o and size your instances appropriately, it's been pretty good. Maybe not the most cost-effective option, but you're paying for the service to not have to worry about stocking spare parts or making sure your hardware is burned in before use. And AWS makes it easy to add regional or even global redundancy, if that's what you want. (Of course that costs even more money, but if you need it, using AWS is a lot easier than finding colos around the world yourself.)Like many have said, the problem of using VMs for databases is that a lot of VM systems try to over-subscribe the hardware for more savings. That works for a lot of loads but not a busy database. So just make sure your VM isn't doing that to you, and most of the performance argument for avoiding VMs goes away.", "msg_date": "Tue, 26 Nov 2013 10:40:54 -0800", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Tue, Nov 26, 2013 at 10:40 AM, Ben Chobot <[email protected]> wrote:\n\n> On Nov 26, 2013, at 9:24 AM, Craig James wrote:\n>\n> So far I'm impressed by what I've read about Amazon's Postgres instances.\n> Maybe the reality will be disappointing, but (for example) the idea of\n> setting up streaming replication with one click is pretty appealing.\n>\n>\n> Where did you hear this was an option? When we talked to AWS about their\n> Postgres RDS offering, they were pretty clear that (currently) replication\n> is hardware-based, the slave is not live, and you don't get access to the\n> WALs that they use internally for PITR. 
Changing that is something they\n> want to address, but isn't there today.\n>\n\nI was guessing from the description of their \"High Availability\" option ...\nbut maybe it uses something like pg-pool, or as you say, maybe they do it\nat the hardware level.\n\nhttp://aws.amazon.com/rds/postgresql/#High-Availability\n\n\n\"Multi-AZ Deployments – This deployment option for your production DB\nInstances enhances database availability while protecting your latest\ndatabase updates against unplanned outages. When you create or modify your\nDB Instance to run as a Multi-AZ deployment, Amazon RDS will automatically\nprovision and manage a “standby” replica in a different Availability Zone\n(independent infrastructure in a physically separate location). Database\nupdates are made concurrently on the primary and standby resources to\nprevent replication lag. In the event of planned database maintenance, DB\nInstance failure, or an Availability Zone failure, Amazon RDS will\nautomatically failover to the up-to-date standby so that database\noperations can resume quickly without administrative intervention. Prior to\nfailover you cannot directly access the standby, and it cannot be used to\nserve read traffic.\"\n\nEither way, if a cold standby is all you need, it's still a one-click\noption, lots simpler than setting it up yourself.\n\nCraig\n\nOn Tue, Nov 26, 2013 at 10:40 AM, Ben Chobot <[email protected]> wrote:\nOn Nov 26, 2013, at 9:24 AM, Craig James wrote:\n\nSo far I'm impressed by what I've read about Amazon's Postgres instances. Maybe the reality will be disappointing, but (for example) the idea of setting up streaming replication with one click is pretty appealing.\nWhere did you hear this was an option? When we talked to AWS about their Postgres RDS offering, they were pretty clear that (currently) replication is hardware-based, the slave is not live, and you don't get access to the WALs that they use internally for PITR. Changing that is something they want to address, but isn't there today.\nI was guessing from the description of their \"High Availability\" option ... but maybe it uses something like pg-pool, or as you say, maybe they do it at the hardware level.\nhttp://aws.amazon.com/rds/postgresql/#High-Availability\n\"Multi-AZ Deployments – This deployment option for your production DB Instances enhances database availability while protecting your latest database updates against unplanned outages. When you create or modify your DB Instance to run as a Multi-AZ deployment, Amazon RDS will automatically provision and manage a “standby” replica in a different Availability Zone (independent infrastructure in a physically separate location). Database updates are made concurrently on the primary and standby resources to prevent replication lag. In the event of planned database maintenance, DB Instance failure, or an Availability Zone failure, Amazon RDS will automatically failover to the up-to-date standby so that database operations can resume quickly without administrative intervention. 
Prior to failover you cannot directly access the standby, and it cannot be used to serve read traffic.\"\nEither way, if a cold standby is all you need, it's still a one-click option, lots simpler than setting it up yourself.Craig", "msg_date": "Tue, 26 Nov 2013 11:18:41 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Tue, Nov 26, 2013 at 11:18:41AM -0800, Craig James wrote:\n- On Tue, Nov 26, 2013 at 10:40 AM, Ben Chobot <[email protected]> wrote:\n- \n- > On Nov 26, 2013, at 9:24 AM, Craig James wrote:\n- >\n- > So far I'm impressed by what I've read about Amazon's Postgres instances.\n- > Maybe the reality will be disappointing, but (for example) the idea of\n- > setting up streaming replication with one click is pretty appealing.\n- >\n- >\n- > Where did you hear this was an option? When we talked to AWS about their\n- > Postgres RDS offering, they were pretty clear that (currently) replication\n- > is hardware-based, the slave is not live, and you don't get access to the\n- > WALs that they use internally for PITR. Changing that is something they\n- > want to address, but isn't there today.\n- >\n- \n- I was guessing from the description of their \"High Availability\" option ...\n- but maybe it uses something like pg-pool, or as you say, maybe they do it\n- at the hardware level.\n- \n- http://aws.amazon.com/rds/postgresql/#High-Availability\n- \n- \n- \"Multi-AZ Deployments – This deployment option for your production DB\n- Instances enhances database availability while protecting your latest\n- database updates against unplanned outages. When you create or modify your\n- DB Instance to run as a Multi-AZ deployment, Amazon RDS will automatically\n- provision and manage a “standby” replica in a different Availability Zone\n- (independent infrastructure in a physically separate location). Database\n- updates are made concurrently on the primary and standby resources to\n- prevent replication lag. In the event of planned database maintenance, DB\n- Instance failure, or an Availability Zone failure, Amazon RDS will\n- automatically failover to the up-to-date standby so that database\n- operations can resume quickly without administrative intervention. Prior to\n- failover you cannot directly access the standby, and it cannot be used to\n- serve read traffic.\"\n- \n- Either way, if a cold standby is all you need, it's still a one-click\n- option, lots simpler than setting it up yourself.\n- \n- Craig\n\nThe Multi-AZ deployments don't expose the replica to you unless there is a \nfailover. (in which case it picks one and promotes it)\n\nThere is an option for \"Create Read Replica\" but it's currently not available so\nwe can assume that will eventually be an option.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Nov 2013 11:51:04 -0800", "msg_from": "David Kerr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" } ]
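A practical follow-up to the thread above, given the point that virtualized storage can fail to honor fsync: before trusting a VM-backed volume with a production cluster, it is worth running the pg_test_fsync utility that ships with recent PostgreSQL versions from the filesystem that will hold the data directory and sanity-checking the numbers. The path below is only an example:

    # run from the volume that will hold the data directory
    cd /var/lib/postgresql/9.3/main
    pg_test_fsync

    # a single 7200 rpm disk can physically sustain only ~120 flushes per second,
    # so tens of thousands of ops/sec from spinning storage means a cache somewhere
    # in the stack is acknowledging writes before they are durable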
[ { "msg_contents": "We have several independent tables on a multi-core machine serving Select\nqueries. These tables fit into memory; and each Select queries goes over\none table's pages sequentially. In this experiment, there are no indexes or\ntable joins.\n\nWhen we send concurrent Select queries to these tables, query performance\ndoesn't scale out with the number of CPU cores. We find that complex Select\nqueries scale out better than simpler ones. We also find that increasing\nthe block size from 8 KB to 32 KB, or increasing shared_buffers to include\nthe working set mitigates the problem to some extent.\n\nFor our experiments, we chose an 8-core machine with 68 GB of memory from\nAmazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and\nset shared_buffers to 4 GB.\n\nWe then generated 1, 2, 4, and 8 separate tables using the data generator\nfrom the industry standard TPC-H benchmark. Each table we generated, called\nlineitem-1, lineitem-2, etc., had about 750 MB of data. Next, we sent 1, 2,\n4, and 8 concurrent Select queries to these tables to observe the scale out\nbehavior. Our expectation was that since this machine had 8 cores, our run\ntimes would stay constant all throughout. Also, we would have expected the\nmachine's CPU utilization to go up to 100% at 8 concurrent queries. Neither\nof those assumptions held true.\n\nWe found that query run times degraded as we increased the number of\nconcurrent Select queries. Also, CPU utilization flattened out at less than\n50% for the simpler queries. Full results with block size of 8KB are below:\n\n Table select count(*) TPC-H Simple (#6)[2]\n TPC-H Complex (#1)[1]\n1 Table / 1 query 1.5 s 2.5 s\n 8.4 s\n2 Tables / 2 queries 1.5 s 2.5 s\n 8.4 s\n4 Tables / 4 queries 2.0 s 2.9 s\n 8.8 s\n8 Tables / 8 queries 3.3 s 4.0 s\n 9.6 s\n\nWe then increased the block size (BLCKSZ) from 8 KB to 32 KB and recompiled\nPostgreSQL. This change had a positive impact on query completion times.\nHere are the new results with block size of 32 KB:\n\n Table select count(*) TPC-H Simple (#6)[2]\n TPC-H Complex (#1)[1]\n1 Table / 1 query 1.5 s 2.3 s\n 8.0 s\n2 Tables / 2 queries 1.5 s 2.3 s\n 8.0 s\n4 Tables / 4 queries 1.6 s 2.4 s\n 8.1 s\n8 Tables / 8 queries 1.8 s 2.7 s\n 8.3 s\n\nAs a quick side, we also repeated the same experiment on an EC2 instance\nwith 16 CPU cores, and found that the scale out behavior became worse\nthere. (We also tried increasing the shared_buffers to 30 GB. This change\ncompletely solved the scaling out problem on this instance type, but hurt\nour performance on the hi1.4xlarge instances.)\n\nUnfortunately, increasing the block size from 8 to 32 KB has other\nimplications for some of our customers. Could you help us out with the\nproblem here?\n\nWhat can we do to identify the problem's root cause? Can we work around it?\n\nThank you,\nMetin\n\n[1] http://examples.citusdata.com/tpch_queries.html#query-1\n[2] http://examples.citusdata.com/tpch_queries.html#query-6\n\nWe have several independent tables on a multi-core machine serving Select queries. These tables fit into memory; and each Select queries goes over one table's pages sequentially. In this experiment, there are no indexes or table joins.\nWhen we send concurrent Select queries to these tables, query performance doesn't scale out with the number of CPU cores. We find that complex Select queries scale out better than simpler ones. 
We also find that increasing the block size from 8 KB to 32 KB, or increasing shared_buffers to include the working set mitigates the problem to some extent.\nFor our experiments, we chose an 8-core machine with 68 GB of memory from Amazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and set shared_buffers to 4 GB.\nWe then generated 1, 2, 4, and 8 separate tables using the data generator from the industry standard TPC-H benchmark. Each table we generated, called lineitem-1, lineitem-2, etc., had about 750 MB of data. Next, we sent 1, 2, 4, and 8 concurrent Select queries to these tables to observe the scale out behavior. Our expectation was that since this machine had 8 cores, our run times would stay constant all throughout. Also, we would have expected the machine's CPU utilization to go up to 100% at 8 concurrent queries. Neither of those assumptions held true.\nWe found that query run times degraded as we increased the number of concurrent Select queries. Also, CPU utilization flattened out at less than 50% for the simpler queries. Full results with block size of 8KB are below:\n                         Table select count(*)    TPC-H Simple (#6)[2]    TPC-H Complex (#1)[1]1 Table  / 1 query               1.5 s                    2.5 s                  8.4 s\n2 Tables / 2 queries             1.5 s                    2.5 s                  8.4 s4 Tables / 4 queries             2.0 s                    2.9 s                  8.8 s\n8 Tables / 8 queries             3.3 s                    4.0 s                  9.6 sWe then increased the block size (BLCKSZ) from 8 KB to 32 KB and recompiled PostgreSQL. This change had a positive impact on query completion times. Here are the new results with block size of 32 KB:\n                         Table select count(*)    TPC-H Simple (#6)[2]    TPC-H Complex (#1)[1]1 Table  / 1 query               1.5 s                    2.3 s                  8.0 s\n2 Tables / 2 queries             1.5 s                    2.3 s                  8.0 s4 Tables / 4 queries             1.6 s                    2.4 s                  8.1 s\n8 Tables / 8 queries             1.8 s                    2.7 s                  8.3 sAs a quick side, we also repeated the same experiment on an EC2 instance with 16 CPU cores, and found that the scale out behavior became worse there. (We also tried increasing the shared_buffers to 30 GB. This change completely solved the scaling out problem on this instance type, but hurt our performance on the hi1.4xlarge instances.)\nUnfortunately, increasing the block size from 8 to 32 KB has other implications for some of our customers. Could you help us out with the problem here?\nWhat can we do to identify the problem's root cause? Can we work around it?\nThank you,Metin[1] http://examples.citusdata.com/tpch_queries.html#query-1\n[2] http://examples.citusdata.com/tpch_queries.html#query-6", "msg_date": "Wed, 27 Nov 2013 10:28:30 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel Select query performance and shared buffers" }, { "msg_contents": "On Wed, Nov 27, 2013 at 2:28 AM, Metin Doslu <[email protected]> wrote:\n> We have several independent tables on a multi-core machine serving Select\n> queries. These tables fit into memory; and each Select queries goes over one\n> table's pages sequentially. 
In this experiment, there are no indexes or\n> table joins.\n>\n> When we send concurrent Select queries to these tables, query performance\n> doesn't scale out with the number of CPU cores. We find that complex Select\n> queries scale out better than simpler ones. We also find that increasing the\n> block size from 8 KB to 32 KB, or increasing shared_buffers to include the\n> working set mitigates the problem to some extent.\n>\n> For our experiments, we chose an 8-core machine with 68 GB of memory from\n> Amazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and set\n> shared_buffers to 4 GB.\n>\n> We then generated 1, 2, 4, and 8 separate tables using the data generator\n> from the industry standard TPC-H benchmark. Each table we generated, called\n> lineitem-1, lineitem-2, etc., had about 750 MB of data. Next, we sent 1, 2,\n> 4, and 8 concurrent Select queries to these tables to observe the scale out\n> behavior. Our expectation was that since this machine had 8 cores, our run\n> times would stay constant all throughout. Also, we would have expected the\n> machine's CPU utilization to go up to 100% at 8 concurrent queries. Neither\n> of those assumptions held true.\n>\n> We found that query run times degraded as we increased the number of\n> concurrent Select queries. Also, CPU utilization flattened out at less than\n> 50% for the simpler queries. Full results with block size of 8KB are below:\n>\n> Table select count(*) TPC-H Simple (#6)[2]\n> TPC-H Complex (#1)[1]\n> 1 Table / 1 query 1.5 s 2.5 s\n> 8.4 s\n> 2 Tables / 2 queries 1.5 s 2.5 s\n> 8.4 s\n> 4 Tables / 4 queries 2.0 s 2.9 s\n> 8.8 s\n> 8 Tables / 8 queries 3.3 s 4.0 s\n> 9.6 s\n>\n> We then increased the block size (BLCKSZ) from 8 KB to 32 KB and recompiled\n> PostgreSQL. This change had a positive impact on query completion times.\n> Here are the new results with block size of 32 KB:\n>\n> Table select count(*) TPC-H Simple (#6)[2]\n> TPC-H Complex (#1)[1]\n> 1 Table / 1 query 1.5 s 2.3 s\n> 8.0 s\n> 2 Tables / 2 queries 1.5 s 2.3 s\n> 8.0 s\n> 4 Tables / 4 queries 1.6 s 2.4 s\n> 8.1 s\n> 8 Tables / 8 queries 1.8 s 2.7 s\n> 8.3 s\n>\n> As a quick side, we also repeated the same experiment on an EC2 instance\n> with 16 CPU cores, and found that the scale out behavior became worse there.\n> (We also tried increasing the shared_buffers to 30 GB. This change\n> completely solved the scaling out problem on this instance type, but hurt\n> our performance on the hi1.4xlarge instances.)\n>\n> Unfortunately, increasing the block size from 8 to 32 KB has other\n> implications for some of our customers. Could you help us out with the\n> problem here?\n>\n> What can we do to identify the problem's root cause? Can we work around it?\n\nI'm curious if you have a hardware 8 core or better box laying around\nto replicate the test on. I've noticed scaling issues on virtual\nplatforms also.\n\nI'm guessing you're getting lwlock bounced around on either\nBufMappingPartitionLock (more likely) or the BufFreelistLock. Can\nyou/have you run a build with lock stats enabled?\n\nAlso, can I see a typical 'top' during poor scaling count(*) activity?\nIn particular, what's sys cpu%. 
I'm guessing it's non trivial.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Dec 2013 14:03:12 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Tue, Dec 10, 2013 at 5:03 PM, Merlin Moncure <[email protected]> wrote:\n> Also, can I see a typical 'top' during poor scaling count(*) activity?\n> In particular, what's sys cpu%. I'm guessing it's non trivial.\n\n\nThere was another thread, this seems like a mistaken double post or\nsomething like that.\n\nIn that other thread, he did provide that, and it was ~40% sy.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Dec 2013 18:06:11 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Tue, Dec 10, 2013 at 2:06 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Dec 10, 2013 at 5:03 PM, Merlin Moncure <[email protected]> wrote:\n>> Also, can I see a typical 'top' during poor scaling count(*) activity?\n>> In particular, what's sys cpu%. I'm guessing it's non trivial.\n>\n>\n> There was another thread, this seems like a mistaken double post or\n> something like that.\n>\n> In that other thread, he did provide that, and it was ~40% sy.\n\noops. disregard, I'll respond over there.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Dec 2013 14:20:11 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" } ]
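Before digging into lock statistics, it can also help to confirm that the lineitem tables really stay resident in shared_buffers during the concurrent runs. A minimal sketch using the contrib pg_buffercache extension; the 'lineitem%' name pattern and the 8 KB block size are assumptions taken from the description above:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- how much of each lineitem table is currently cached in shared_buffers
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached   -- adjust if built with BLCKSZ = 32 KB
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
  AND c.relname LIKE 'lineitem%'
GROUP BY c.relname
ORDER BY buffers DESC;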
[ { "msg_contents": "On Mon, Nov 25, 2013 at 4:00 PM, Gudmundsson Martin (mg)\n<[email protected]> wrote:\n>\n> I would also make sure to check that the hypervisor does write to permanent storage before returning to the VM with acknowledgement.\n>\nIn the case of ESX, there is no such concern per\nhttp://kb.vmware.com/kb/1008542.\n\nAs Heikki commented, VMware recently compared Postgres performance in\nan ESX (5.1) VM versus in a comparable native Linux. We saw 1.\nESX-level locking causes no vertical scalability degradation, 2.\nMemory oversubscription can indeed be a performance hazard when\nconsolidating mulitple Postgres VMs on one host. Yet we found moderate\nmemory oversubscription (up to 20%) might work out fine: we saw <5%\ndegradation at 20% memory oversubscription in a conventional setup\n(where Postgres server uses 25% memory shared_buffers and VM uses\nout-of-the-box kernel-level memory ballooning.) Nitty-gritty details\ncan be found in the whitepaper\nhttp://www.vmware.com/files/pdf/techpaper/vPostgres-perf.pdf\n(Disclaimer: I'm a author.)\n\nAs many pointed out here, storage is most likely where extra care of\ncapacity planning can be used when weighing putting Postgres in a VM\nversus natively. Our tests (during the same period as those towards\nthe above observations) read: pgbench default saw ~10% degradation at\n28 pgbench clients on a 32-core Intel Sandy Bridge machine; and dbt2\nwith zero thinking/keying/ time saw ~30% degradation at 28 dbt2\nterminals on the same machine. In both cases, the regression is\ngradually and increasingly more pronounced as concurrency ramps up\n(starting from <5% degradation at 1 client/terminal in both cases.)\n\nRegards,\nDong\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 27 Nov 2013 21:58:20 -0500", "msg_from": "Dong Ye <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "\n\n\n> >\n> > I would also make sure to check that the hypervisor does write to\n> permanent storage before returning to the VM with acknowledgement.\n> >\n> In the case of ESX, there is no such concern per\n> http://kb.vmware.com/kb/1008542.\n\nVery useful info!\n\n> As Heikki commented, VMware recently compared Postgres performance in\n> an ESX (5.1) VM versus in a comparable native Linux. We saw 1.\n> ESX-level locking causes no vertical scalability degradation, 2.\n> Memory oversubscription can indeed be a performance hazard when\n> consolidating mulitple Postgres VMs on one host. Yet we found moderate\n> memory oversubscription (up to 20%) might work out fine: we saw <5%\n> degradation at 20% memory oversubscription in a conventional setup\n> (where Postgres server uses 25% memory shared_buffers and VM uses\n> out-of-the-box kernel-level memory ballooning.) Nitty-gritty details\n> can be found in the whitepaper\n> http://www.vmware.com/files/pdf/techpaper/vPostgres-perf.pdf\n> (Disclaimer: I'm a author.)\n\nInteresting reading. \n\nThere was some earlier comment in this discussion about not using NFS datastores for Postgres VMDK's. Would you think you'd see a difference in scalability behavior or performance in these tests if a NFS datastore would be used instead? 
Provided the architecture is properly setup for that, with high speed low latency networking, and fast NAS storage.\n\n\nThanks!\n\n> Regards,\n> Dong\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 28 Nov 2013 08:45:24 +0000", "msg_from": "\"Gudmundsson Martin (mg)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Wed, Nov 27, 2013 at 7:58 PM, Dong Ye <[email protected]> wrote:\n\n> As Heikki commented, VMware recently compared Postgres performance in\n> an ESX (5.1) VM versus in a comparable native Linux. We saw 1.\n> ESX-level locking causes no vertical scalability degradation, 2.\n\nFYI Vmware has an optimized version of Postgresql for use on VSphere\netc: http://www.vmware.com/products/vfabric-postgres/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 28 Nov 2013 11:40:10 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" }, { "msg_contents": "On Fri, Nov 29, 2013 at 3:40 AM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Nov 27, 2013 at 7:58 PM, Dong Ye <[email protected]> wrote:\n>\n>> As Heikki commented, VMware recently compared Postgres performance in\n>> an ESX (5.1) VM versus in a comparable native Linux. We saw 1.\n>> ESX-level locking causes no vertical scalability degradation, 2.\n>\n> FYI Vmware has an optimized version of Postgresql for use on VSphere\n> etc: http://www.vmware.com/products/vfabric-postgres/\nThere is actually no fork of the core in vFabric Postgres, Postgres\ncore is unmodified as of release 9.3. Have a look at the release\nnotes:\nhttps://www.vmware.com/support/vfabric-postgres/doc/vfabric-postgres-93-release-notes.html\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 29 Nov 2013 10:41:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql in a Virtual Machine" } ]
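Whatever the hypervisor's write-acknowledgement behaviour turns out to be, the guest-side durability settings are worth double-checking as well; they can be read straight from SQL. A small sketch, nothing VMware-specific:

SELECT name, setting
FROM pg_settings
WHERE name IN ('fsync', 'synchronous_commit', 'full_page_writes', 'wal_sync_method');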
[ { "msg_contents": "> There was some earlier comment in this discussion about not using NFS datastores for Postgres VMDK's. Would you think you'd see a difference in scalability behavior or performance in these tests if a NFS datastore would be used instead? Provided the architecture is properly setup for that, with high speed low latency networking, and fast NAS storage.\n>\nThough not first-hand experience, my understanding is that performance\nis not near the top of the list of considerations when weighing\ndifferent storage protocols. You might find the following docs useful:\nhttp://www.vmware.com/files/pdf/techpaper/Storage_Protocol_Comparison.pdf\nhttp://media.netapp.com/documents/tr-3916.pdf\n\nCheers,\nDong\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 28 Nov 2013 13:11:31 -0500", "msg_from": "Dong Ye <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql in a Virtual Machine" } ]
[ { "msg_contents": "Dear all,\nI have a quite strange problem running an extensive query on geo data\nchecking for crossing ways. I don't know if this is a postgres or postgis\nproblem, but I hope you can help. Running one thread is no problem, it\nfinishes within 10-15 minutes. Run two of those queries in parallel and\nthey will not finish within 24 hours. It is definitely not a caching or I/O\nproblem.\n\nFirst, the environment:\nRunning on a large server (32 cores, 128 GB RAM, fast RAID disks)\nI tested psql 8.1 / 9.1 / 9.3 and postgis 1.5 and 2.1.0 on Debian 6 and\nOpenSuse 12.3. All behave similar. The pgsql server settings were\noptimized using pgtune, wal logging and autovacuum is off.\n\nI'm working on a set of databases, each 5-10 GB big filled with OSM\ngeo data. I run many different queries, and I know the server can handle\nup to 8 parallel tasks without a decrease in performance compared to a\nsingle thread. Most data is kept in the cache and almost no read access\nto the disk needs to be done.\nEverything works well, despite one query, that runs on a table with ~ 1M\nentries. It searches for ways crossing each other:\nhttp://etherpad.netluchs.de/pgquery\n(The definition of the source table is included as well)\n\nHere is the explain analyze of the query:\nhttp://explain.depesz.com/s/fAcV\nAs you can see, the row estimate is far off, but the runtime of 11 minutes\nis acceptable, I think.\n\nWhen I run a second instance of this query in a unrelated database on the\nsame server, they take 100% CPU, no iowait and they do not finish even\nafter more than a day.\nAn explain done directly before executing the query shows a huge cost\nestimate and varying different plans:\nhttp://explain.depesz.com/s/XDR\nhttp://explain.depesz.com/s/SeG\n\nHow can two queries have such a strong influence on each other? Especially\nwhen the host server could handle even ten queries without problems?\nAnd most important: What can I do?\n\nThank you all in advance for your help!\nJan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 29 Nov 2013 23:07:59 +0100", "msg_from": "Jan Michel <[email protected]>", "msg_from_op": true, "msg_subject": "One query run twice in parallel results in huge performance decrease" }, { "msg_contents": "On Fri, Nov 29, 2013 at 2:07 PM, Jan Michel <[email protected]> wrote:\n\n>\n> When I run a second instance of this query in a unrelated database on the\n> same server, they take 100% CPU, no iowait and they do not finish even\n> after more than a day.\n>\n\nThe planner is not aware of what else is going on in the server, so it\ncan't change plans with that in mind. So I think that that is a red\nherring. I'd guess that the 2nd database is missing the geometry index, or\nhas it defined in some different way such that the database doesn't think\nit can be used.\n\nAre you sure that you get good plans when you run the exact same queries on\nthe exact same database/schema one at a time?\n\nCheers,\n\nJeff\n\nOn Fri, Nov 29, 2013 at 2:07 PM, Jan Michel <[email protected]> wrote:\n\nWhen I run a second instance of this query in a unrelated database on the\nsame server, they take 100% CPU, no iowait and they do not finish even\nafter more than a day.The planner is not aware of what else is going on in the server, so it can't change plans with that in mind.  So I think that that is a red herring.  
I'd guess that the 2nd database is missing the geometry index, or has it defined in some different way such that the database doesn't think it can be used.\nAre you sure that you get good plans when you run the exact same queries on the exact same database/schema one at a time?Cheers,Jeff", "msg_date": "Fri, 29 Nov 2013 14:42:04 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One query run twice in parallel results in huge\n performance decrease" }, { "msg_contents": "Hi Jeff,\nthanks for the answer.\n\nOn 29.11.2013 23:42, Jeff Janes wrote:\n> The planner is not aware of what else is going on in the server\nI was not aware of this as well.\n\n> I'd guess that the 2nd database is missing the geometry index, or has \n> it defined in some different way such that the database doesn't think \n> it can be used. \nUnfortunately - no. E.g. the first problematic plan I posted is from the \nsame schema loaded with the same data as the one that works well.\nAll tables are generated freshly from scratch by the same script only \nminutes before this query is run. I tested them all individually and \nnever saw any problem, all use the same plan. As soon as I run two in \nparallel it happens. I also did a test by feeding two tables with \nidentical data - again the same problem.\n\nFirst I used tables in different schemas, then I tested to run them in \ndifferent databases. It had no influence. The thing is 100% reproducable \non three different machines with different hardware, different OS and \ndifferent pgsql versions. A single query is fast, as soon as a second \none comes in parallel it gets stuck. Every other query I have in the \ntoolchain does not show this behavior - and there are some quite \nexpensive ones as well.\n\nJan\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 30 Nov 2013 00:03:53 +0100", "msg_from": "Jan Michel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One query run twice in parallel results in huge performance\n decrease" }, { "msg_contents": "Jan Michel <[email protected]> writes:\n> All tables are generated freshly from scratch by the same script only \n> minutes before this query is run. I tested them all individually and \n> never saw any problem, all use the same plan. As soon as I run two in \n> parallel it happens. I also did a test by feeding two tables with \n> identical data - again the same problem.\n\nHm. Are you explicitly ANALYZE'ing the newly-built tables in your script,\nor are you just trusting auto-analyze to get the job done? It seems\npossible that auto-analyze manages to finish before you start your big\nquery if there's just one set of tables to analyze, but not if there's two\nsets. 
That would explain bad choices of plans ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 29 Nov 2013 18:48:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One query run twice in parallel results in huge performance\n decrease" }, { "msg_contents": "On 30.11.2013 00:48, Tom Lane wrote:\n> Are you explicitly ANALYZE'ing the newly-built tables in your script,\n> or are you just trusting auto-analyze to get the job done?\nHi Tom,\nthere is an explicit analyze of the table being done between filling the \ntable with values and running this query. I sketched the process of \ncreating and filling the table here:\nhttp://etherpad.netluchs.de/pgquery\n\nJan\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 30 Nov 2013 12:05:47 +0100", "msg_from": "Jan Michel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One query run twice in parallel results in huge performance\n decrease" }, { "msg_contents": "On Fri, Nov 29, 2013 at 3:03 PM, Jan Michel <[email protected]> wrote:\n\n> Hi Jeff,\n> thanks for the answer.\n>\n>\n> On 29.11.2013 23:42, Jeff Janes wrote:\n>\n>> The planner is not aware of what else is going on in the server\n>>\n> I was not aware of this as well.\n>\n>\n> I'd guess that the 2nd database is missing the geometry index, or has it\n>> defined in some different way such that the database doesn't think it can\n>> be used.\n>>\n> Unfortunately - no. E.g. the first problematic plan I posted is from the\n> same schema loaded with the same data as the one that works well.\n> All tables are generated freshly from scratch by the same script only\n> minutes before this query is run. I tested them all individually and never\n> saw any problem, all use the same plan. As soon as I run two in parallel it\n> happens. I also did a test by feeding two tables with identical data -\n> again the same problem.\n>\n> First I used tables in different schemas, then I tested to run them in\n> different databases. It had no influence. The thing is 100% reproducable on\n> three different machines with different hardware, different OS and\n> different pgsql versions. A single query is fast, as soon as a second one\n> comes in parallel it gets stuck. Every other query I have in the toolchain\n> does not show this behavior - and there are some quite expensive ones as\n> well.\n\n\n\nI think what I would do next is EXPLAIN (without ANALYZE) one of the\nqueries repeatedly, say once a second, while the other query either runs or\ndoesn't run repeatedly, that is the other query runs for 11 minutes (or\nhowever it takes to run), and then sleeps for 11 minutes in a loop. Then\nyou can see if the explain plan differs very reliably, and if the\ntransition is exactly aligned with the other starting and stopping or if it\nis offset.\n\nCheers,\n\nJeff\n\nOn Fri, Nov 29, 2013 at 3:03 PM, Jan Michel <[email protected]> wrote:\nHi Jeff,\nthanks for the answer.\n\nOn 29.11.2013 23:42, Jeff Janes wrote:\n\nThe planner is not aware of what else is going on in the server\n\nI was not aware of this as well.\n\n\nI'd guess that the 2nd database is missing the geometry index, or has it defined in some different way such that the database doesn't think it can be used. 
\n\nUnfortunately - no. E.g. the first problematic plan I posted is from the same schema loaded with the same data as the one that works well.\nAll tables are generated freshly from scratch by the same script only minutes before this query is run. I tested them all individually and never saw any problem, all use the same plan. As soon as I run two in parallel it happens. I also did a test by feeding two tables with identical data - again the same problem.\n\nFirst I used tables in different schemas, then I tested to run them in different databases. It had no influence. The thing is 100% reproducable on three different machines with different hardware, different OS and different pgsql versions. A single query is fast, as soon as a second one comes in parallel it gets stuck. Every other query I have in the toolchain does not show this behavior - and there are some quite expensive ones as well.\nI think what I would do next is EXPLAIN (without ANALYZE) one of the queries repeatedly, say once a second, while the other query either runs or doesn't run repeatedly, that is the other query runs for 11 minutes (or however it takes to run), and then sleeps for 11 minutes in a loop.  Then you can see if the explain plan differs very reliably, and if the transition is exactly aligned with the other starting and stopping or if it is offset.\nCheers,Jeff", "msg_date": "Mon, 2 Dec 2013 15:17:16 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One query run twice in parallel results in huge\n performance decrease" }, { "msg_contents": "On Fri, Nov 29, 2013 at 3:07 PM, Jan Michel <[email protected]> wrote:\n> Dear all,\n> I have a quite strange problem running an extensive query on geo data\n> checking for crossing ways. I don't know if this is a postgres or postgis\n> problem, but I hope you can help. Running one thread is no problem, it\n> finishes within 10-15 minutes. Run two of those queries in parallel and\n> they will not finish within 24 hours. It is definitely not a caching or I/O\n> problem.\n\nWhat does your IO subsystem look like when you're running the query\nboth once and twice?\n\niostat, vmstat, iotop etc area ll useful here.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 2 Dec 2013 18:59:46 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One query run twice in parallel results in huge\n performance decrease" }, { "msg_contents": "Jeff Janes wrote:\n> I think what I would do next is EXPLAIN (without ANALYZE) one of the \n> queries repeatedly, say once a second, while the other query either \n> runs or doesn't run repeatedly, that is the other query runs for 11 \n> minutes (or however it takes to run), and then sleeps for 11 minutes \n> in a loop. Then you can see if the explain plan differs very \n> reliably, and if the transition is exactly aligned with the other \n> starting and stopping or if it is offset.\n\nHi Jeff,\nI ran the one analyze over and over again as you proposed - but the \nresult never changed.\nBut I think I found a solution for the problem. While browsing through \nthe manual I found a statement about GIN indexes:\n\"For tables with GIN indexes, VACUUM (in any form) also completes any \npending index insertions, by moving pending index entries to the \nappropriate places in the main GIN index structure\". 
I use a gist and no \ngin index, but I tried to vacuum the (freshly filled) table, and it \nhelped. It seems that the planer is simply not aware of the existence of \nthe index although I run an analyze on the table right before the query.\n\nThank you all for your suggestions!\nJan\n\n\n\n\n\n\n\nJeff Janes wrote:\n\n\n\n\nI think what I would do next is\n EXPLAIN (without ANALYZE) one of the queries repeatedly, say\n once a second, while the other query either runs or doesn't\n run repeatedly, that is the other query runs for 11 minutes\n (or however it takes to run), and then sleeps for 11 minutes\n in a loop.  Then you can see if the explain plan differs\n very reliably, and if the transition is exactly aligned with\n the other starting and stopping or if it is offset.\n\n\n\n\n\n Hi Jeff,\n I ran the one analyze over and over again as you proposed - but the\n result never changed. \n But I think I found a solution for the problem. While browsing\n through the manual I found a statement about GIN indexes:\n \"For tables with GIN indexes, VACUUM (in any form) also completes any\n pending index insertions, by moving pending index entries to the\n appropriate places in the main GIN\n index structure\". I use a gist and no gin index, but I tried to\n vacuum the (freshly filled) table, and it helped. It seems that the\n planer is simply not aware of the existence of the index although I\n run an analyze on the table right before the query.\n\n Thank you all for your suggestions!\n Jan", "msg_date": "Tue, 03 Dec 2013 21:18:39 +0100", "msg_from": "Jan Michel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One query run twice in parallel results in huge performance\n decrease" } ]
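The fix here amounts to an explicit VACUUM between the bulk load and the expensive self-join; folded into the generation script it looks roughly like this (table and column names are hypothetical, since the real schema is only linked above):

-- after the script has filled the table and built its GiST index
VACUUM (ANALYZE) osm_ways;

-- check that the planner now considers the GiST index before starting parallel runs
EXPLAIN
SELECT count(*)
FROM osm_ways a
JOIN osm_ways b
  ON a.geom && b.geom        -- bounding-box overlap, can use the GiST index
 AND a.id < b.id;            -- avoid comparing a way with itself or counting pairs twice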
[ { "msg_contents": "Friends, i need help. \n\nI have query below that running well so far. it needs only 5.335 second to get data from 803.583 records. Here is the query :\n\nwith qry1 as \n(select tanggal, extract(month from tanggal) as bulan, tblsupplier.id, nama, kodebarang, namabarang, keluar, \n\tcase when discount<=100 then\n\t keluar*(harga -(discount/100*harga))\n\twhen tbltransaksi.discount>100 then\n\t\tkeluar*(harga-discount)\n\tend \n as jumlah\nfrom tbltransaksi \njoin tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\njoin tblsupplier on tblproduk.supplierid=tblsupplier.id\nwhere jualid is not null \nand extract(year from tanggal)='2013')\n\nselect \n id, nama, kodebarang, namabarang,\n sum(case when bulan = 1 then keluar else 0 end) as Jan,\n sum(case when bulan = 2 then keluar else 0 end) as Feb,\n sum(case when bulan = 3 then keluar else 0 end) as Maret,\n sum(case when bulan = 4 then keluar else 0 end) as April,\n sum(case when bulan = 5 then keluar else 0 end) as Mei,\n sum(case when bulan = 6 then keluar else 0 end) as Juni,\n sum(case when bulan = 7 then keluar else 0 end) as Juli,\n sum(case when bulan = 8 then keluar else 0 end) as Agust,\n sum(case when bulan = 9 then keluar else 0 end) as Sept,\n sum(case when bulan = 10 then keluar else 0 end) as Okt,\n sum(case when bulan = 11 then keluar else 0 end) as Nov,\n sum(case when bulan = 12 then keluar else 0 end) as Des,\n sum(coalesce(keluar,0)) as total\nfrom qry1\ngroup by id, nama, kodebarang, namabarang\norder by total desc\nlimit 1000\n\nBut the problem is : when i change the where clause to :\n\nwhere jualid is not null or returjualid is not null\nand extract(year from tanggal)='2013')\n\n\n(there is additional or returjualid is not null,) the query needs 56 second to display the result. 10 times longer.\nIs there anyway to speed up the query ? 
My server is Dell PowerEdge T110II, Intel Xeon E1230 Sandy bridge 3.2GHZ, 4GB memory, 500GB Sata III HDD running on Ubuntu server 12.04, PostgreSql 9.3\n\nPostgresqlconf :\nmax_connections=50\nshared_buffers=1024MB\nwall_buffers=16MB\nmax_prepared_transactions=0\nwork_mem=50MB\nmaintenance_work_mem=256MB\n\nAnalyze result :\n\nOperation Operation Info Start-up Cost Total Cost Number of Rows Row Width \nLimit CTE qry1 28553.93 28554.89 384 376 \n |--Hash Join Hash Cond: ((tblproduk.supplierid)::text = (tblsup 3274.11 28179.15 3832 84 \n |--Hash Join Hash Cond: ((tbltransaksi.kodebarang)::text = (tbl 3252.43 28008.98 3832 67 \n |--Seq Scan on tbltransaks Filter: ((jualid IS NOT NULL) AND (date_part('year 0.00 24684.70 3832 29 \n |--Hash null 2188.30 2188.30 85130 51 \n |--Seq Scan on tblproduk null 0.00 2188.30 85130 51 \n |--Hash null 14.08 14.08 608 26 \n |--Seq Scan on tblsupplier null 0.00 14.08 608 26 \nSort Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric))) 374.78 375.74 384 376 \n |--HashAggregate null 354.46 358.30 384 376 \n |--CTE Scan on qry1 null 0.00 76.64 3832 376 \n\n\nthe table transaksi :\n\nCREATE TABLE public.tbltransaksi (\n id INTEGER NOT NULL,\n tanggal DATE,\n kodebarang VARCHAR(20),\n masuk NUMERIC(10,2) DEFAULT 0,\n keluar NUMERIC(10,2) DEFAULT 0,\n satuan VARCHAR(5),\n keterangan VARCHAR(30),\n jenis VARCHAR(5),\n harga NUMERIC(15,2) DEFAULT 0,\n discount NUMERIC(10,2) DEFAULT 0,\n jualid INTEGER,\n beliid INTEGER,\n mutasiid INTEGER,\n nobukti VARCHAR(20),\n customerid VARCHAR(20),\n modal NUMERIC(15,2) DEFAULT 0,\n awalid INTEGER,\n terimabrgid INTEGER,\n opnameid INTEGER,\n returjualid INTEGER,\n returbeliid INTEGER,\n CONSTRAINT tbltransaksi_pkey PRIMARY KEY(id),\n CONSTRAINT tbltransaksi_fk FOREIGN KEY (returjualid)\n REFERENCES public.tblreturjual(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n DEFERRABLE\n INITIALLY IMMEDIATE,\n CONSTRAINT tbltransaksi_fk1 FOREIGN KEY (jualid)\n REFERENCES public.tblpenjualan(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n NOT DEFERRABLE,\n CONSTRAINT tbltransaksi_fk2 FOREIGN KEY (beliid)\n REFERENCES public.tblpembelian(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n NOT DEFERRABLE,\n CONSTRAINT tbltransaksi_fk3 FOREIGN KEY (mutasiid)\n REFERENCES public.tblmutasi(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n NOT DEFERRABLE,\n CONSTRAINT tbltransaksi_fk4 FOREIGN KEY (returbeliid)\n REFERENCES public.tblreturbeli(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n NOT DEFERRABLE\n) \nWITH (oids = false);\n\nCREATE INDEX tbltransaksi_idx ON public.tbltransaksi\n USING btree (tanggal);\n\nCREATE INDEX tbltransaksi_idx1 ON public.tbltransaksi\n USING btree (kodebarang COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tbltransaksi_idx2 ON public.tbltransaksi\n USING btree (customerid COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tbltransaksi_idx3 ON public.tbltransaksi\n USING btree (awalid);\n\nCREATE INDEX tbltransaksi_idx4 ON public.tbltransaksi\n USING btree (jualid);\n\nCREATE INDEX tbltransaksi_idx5 ON public.tbltransaksi\n USING btree (beliid);\n\nCREATE INDEX tbltransaksi_idx6 ON public.tbltransaksi\n USING btree (mutasiid);\n\nCREATE INDEX tbltransaksi_idx7 ON public.tbltransaksi\n USING btree (opnameid);\n\nCREATE INDEX tbltransaksi_idx8 ON public.tbltransaksi\n USING btree (returjualid);\n\nCREATE INDEX tbltransaksi_idx9 ON public.tbltransaksi\n USING btree (returbeliid);\n\n\n\nHope i can get answer here. Thank you.\nFriends, i need help. I have query below that running well so far. 
it needs only 5.335 second to get data from 803.583 records. Here is the query :with qry1 as (select tanggal, extract(month from tanggal) as bulan, tblsupplier.id, nama, kodebarang, namabarang, keluar,  case when discount<=100 then    keluar*(harga -(discount/100*harga)) when tbltransaksi.discount>100 then keluar*(harga-discount) end     as jumlahfrom tbltransaksi join tblproduk on tbltransaksi.kodebarang=tblproduk.produkidjoin tblsupplier on tblproduk.supplierid=tblsupplier.idwhere jualid is not null and extract(year from tanggal)='2013')select    id, nama, kodebarang, namabarang,  sum(case when bulan = 1 then keluar else 0 end) as Jan,  sum(case when bulan = 2 then keluar else 0 end) as Feb,  sum(case when bulan = 3 then keluar else 0 end) as Maret,  sum(case when bulan = 4 then keluar else 0 end) as April,  sum(case when bulan = 5 then keluar else 0 end) as Mei,  sum(case when bulan = 6 then keluar else 0 end) as Juni,  sum(case when bulan = 7 then keluar else 0 end) as Juli,  sum(case when bulan = 8 then keluar else 0 end) as Agust,  sum(case when bulan = 9 then keluar else 0 end) as Sept,  sum(case when bulan = 10 then keluar else 0 end) as Okt,  sum(case when bulan = 11 then keluar else 0 end) as Nov,  sum(case when bulan = 12 then keluar else 0 end) as Des,  sum(coalesce(keluar,0)) as totalfrom qry1group by id, nama, kodebarang, namabarangorder by total desclimit 1000But the problem is : when i change the where clause to :where jualid is not null or returjualid is not nulland extract(year from tanggal)='2013')(there is additional or returjualid is not null,) the query needs 56 second to display the result. 10 times longer.Is there anyway to speed up the query ? My server is Dell PowerEdge T110II, Intel Xeon E1230 Sandy bridge 3.2GHZ, 4GB memory, 500GB Sata III HDD running on Ubuntu server 12.04, PostgreSql 9.3Postgresqlconf :max_connections=50shared_buffers=1024MBwall_buffers=16MBmax_prepared_transactions=0work_mem=50MBmaintenance_work_mem=256MBAnalyze result :Operation                        Operation Info                                       Start-up Cost                    Total Cost                       Number of Rows                   Row Width                       Limit                              CTE qry1                                           28553.93                         28554.89                         384                              376                               |--Hash Join                     Hash Cond: ((tblproduk.supplierid)::text = (tblsup 3274.11                          28179.15                         3832                             84                                  |--Hash Join                   Hash Cond: ((tbltransaksi.kodebarang)::text = (tbl 3252.43                          28008.98                         3832                             67                                    |--Seq Scan on tbltransaks   Filter: ((jualid IS NOT NULL) AND (date_part('year 0.00                             24684.70                         3832                             29                                    |--Hash                    null                                                 2188.30                          2188.30                          85130                            51                                      |--Seq Scan on tblproduk null                                                 0.00                             2188.30                          85130                            51                                  |--Hash                      
null                                                 14.08                            14.08                            608                              26                                    |--Seq Scan on tblsupplier null                                                 0.00                             14.08                            608                              26                              Sort                               Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric))) 374.78                           375.74                           384                              376                               |--HashAggregate               null                                                 354.46                           358.30                           384                              376                                 |--CTE Scan on qry1          null                                                 0.00                             76.64                            3832                             376                             the table transaksi :CREATE TABLE public.tbltransaksi (  id INTEGER NOT NULL,  tanggal DATE,  kodebarang VARCHAR(20),  masuk NUMERIC(10,2) DEFAULT 0,  keluar NUMERIC(10,2) DEFAULT 0,  satuan VARCHAR(5),  keterangan VARCHAR(30),  jenis VARCHAR(5),  harga NUMERIC(15,2) DEFAULT 0,  discount NUMERIC(10,2) DEFAULT 0,  jualid INTEGER,  beliid INTEGER,  mutasiid INTEGER,  nobukti VARCHAR(20),  customerid VARCHAR(20),  modal NUMERIC(15,2) DEFAULT 0,  awalid INTEGER,  terimabrgid INTEGER,  opnameid INTEGER,  returjualid INTEGER,  returbeliid INTEGER,  CONSTRAINT tbltransaksi_pkey PRIMARY KEY(id),  CONSTRAINT tbltransaksi_fk FOREIGN KEY (returjualid)    REFERENCES public.tblreturjual(id)    ON DELETE CASCADE    ON UPDATE NO ACTION    DEFERRABLE    INITIALLY IMMEDIATE,  CONSTRAINT tbltransaksi_fk1 FOREIGN KEY (jualid)    REFERENCES public.tblpenjualan(id)    ON DELETE CASCADE    ON UPDATE NO ACTION    NOT DEFERRABLE,  CONSTRAINT tbltransaksi_fk2 FOREIGN KEY (beliid)    REFERENCES public.tblpembelian(id)    ON DELETE CASCADE    ON UPDATE NO ACTION    NOT DEFERRABLE,  CONSTRAINT tbltransaksi_fk3 FOREIGN KEY (mutasiid)    REFERENCES public.tblmutasi(id)    ON DELETE CASCADE    ON UPDATE NO ACTION    NOT DEFERRABLE,  CONSTRAINT tbltransaksi_fk4 FOREIGN KEY (returbeliid)    REFERENCES public.tblreturbeli(id)    ON DELETE CASCADE    ON UPDATE NO ACTION    NOT DEFERRABLE) WITH (oids = false);CREATE INDEX tbltransaksi_idx ON public.tbltransaksi  USING btree (tanggal);CREATE INDEX tbltransaksi_idx1 ON public.tbltransaksi  USING btree (kodebarang COLLATE pg_catalog.\"default\");CREATE INDEX tbltransaksi_idx2 ON public.tbltransaksi  USING btree (customerid COLLATE pg_catalog.\"default\");CREATE INDEX tbltransaksi_idx3 ON public.tbltransaksi  USING btree (awalid);CREATE INDEX tbltransaksi_idx4 ON public.tbltransaksi  USING btree (jualid);CREATE INDEX tbltransaksi_idx5 ON public.tbltransaksi  USING btree (beliid);CREATE INDEX tbltransaksi_idx6 ON public.tbltransaksi  USING btree (mutasiid);CREATE INDEX tbltransaksi_idx7 ON public.tbltransaksi  USING btree (opnameid);CREATE INDEX tbltransaksi_idx8 ON public.tbltransaksi  USING btree (returjualid);CREATE INDEX tbltransaksi_idx9 ON public.tbltransaksi  USING btree (returbeliid);Hope i can get answer here. 
Thank you.", "msg_date": "Sun, 1 Dec 2013 14:21:05 +0800", "msg_from": "Hengky Liwandouw <[email protected]>", "msg_from_op": true, "msg_subject": "Speed up the query" }, { "msg_contents": "Hengky Liwandouw <[email protected]> wrote:\n> \n> But the problem is : when i change the where clause to :\n> \n> where jualid is not null or returjualid is not null\n> and extract(year from tanggal)='2013')\n\nTry to create this index:\n\ncreate index xxx on public.tbltransaksi((extract(year from tanggal)))\nwhere jualid is not null or returjualid is not null;\n\nan run the query again, and if this not helps show us explain analyse,\nyou can use explain.depesz.com to provide us the plan.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Dec 2013 08:12:31 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Andreas, sorry this is the correct analyse for the query.\n\nThis is the index i created :\n\nCREATE INDEX tbltransaksi_idx10 ON public.tbltransaksi\n USING btree ((date_part('year'::text, tanggal)));\n\nThis is the analyse of the query\n\n\"Limit (cost=346377.92..346380.42 rows=1000 width=376)\"\n\" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WH (...)\"\n\" CTE qry1\"\n\" -> Hash Join (cost=4444.64..62681.16 rows=766491 width=84)\"\n\" Output: tbltransaksi.tanggal, date_part('month'::text, (tbltransaksi.tanggal)::timestamp without time zone), tblsupplier.id, tblsupplier.nama, tbltransaksi.kodebarang, tblproduk.namabarang, tbltransaksi.keluar, CASE WHEN (tbltransaksi.discount <= (...)\"\n\" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n\" -> Seq Scan on public.tbltransaksi (cost=0.00..24702.53 rows=766491 width=29)\"\n\" Output: tbltransaksi.id, tbltransaksi.tanggal, tbltransaksi.kodebarang, tbltransaksi.masuk, tbltransaksi.keluar, tbltransaksi.satuan, tbltransaksi.keterangan, tbltransaksi.jenis, tbltransaksi.harga, tbltransaksi.discount, tbltransaksi.juali (...)\"\n\" Filter: ((tbltransaksi.jualid IS NOT NULL) OR ((tbltransaksi.returjualid IS NOT NULL) AND (date_part('year'::text, (tbltransaksi.tanggal)::timestamp without time zone) = 2013::double precision)))\"\n\" -> Hash (cost=3380.52..3380.52 rows=85130 width=68)\"\n\" Output: tblproduk.namabarang, tblproduk.produkid, tblsupplier.id, tblsupplier.nama\"\n\" -> Hash Join (cost=21.68..3380.52 rows=85130 width=68)\"\n\" Output: tblproduk.namabarang, tblproduk.produkid, tblsupplier.id, tblsupplier.nama\"\n\" Hash Cond: ((tblproduk.supplierid)::text = (tblsupplier.id)::text)\"\n\" -> Seq Scan on public.tblproduk (cost=0.00..2188.30 rows=85130 width=51)\"\n\" Output: tblproduk.produkid, tblproduk.namabarang, tblproduk.hargajual, tblproduk.subkategoriid, tblproduk.createby, tblproduk.kodepromo, tblproduk.satuan, tblproduk.foto, tblproduk.pajak, tblproduk.listingfee, tblproduk.supplier (...)\"\n\" -> Hash (cost=14.08..14.08 
rows=608 width=26)\"\n\" Output: tblsupplier.id, tblsupplier.nama\"\n\" -> Seq Scan on public.tblsupplier (cost=0.00..14.08 rows=608 width=26)\"\n\" Output: tblsupplier.id, tblsupplier.nama\"\n\" -> Sort (cost=283696.76..283888.39 rows=76650 width=376)\"\n\" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(C (...)\"\n\" Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n\" -> GroupAggregate (cost=221240.80..279494.13 rows=76650 width=376)\"\n\" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END), sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END), sum( (...)\"\n\" -> Sort (cost=221240.80..223157.03 rows=766491 width=376)\"\n\" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, qry1.bulan, qry1.keluar\"\n\" Sort Key: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang\"\n\" -> CTE Scan on qry1 (cost=0.00..15329.82 rows=766491 width=376)\"\n\" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, qry1.bulan, qry1.keluar\"\n\n\nOn Dec 1, 2013, at 3:12 PM, Andreas Kretschmer wrote:\n\n> Hengky Liwandouw <[email protected]> wrote:\n>> \n>> But the problem is : when i change the where clause to :\n>> \n>> where jualid is not null or returjualid is not null\n>> and extract(year from tanggal)='2013')\n> \n> Try to create this index:\n> \n> create index xxx on public.tbltransaksi((extract(year from tanggal)))\n> where jualid is not null or returjualid is not null;\n> \n> an run the query again, and if this not helps show us explain analyse,\n> you can use explain.depesz.com to provide us the plan.\n> \n> \n> Andreas\n> -- \n> Really, I'm not out to destroy Microsoft. That will just be a completely\n> unintentional side effect. (Linus Torvalds)\n> \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Dec 2013 16:31:48 +0800", "msg_from": "Hengky Liwandouw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Hengky Liwandouw <[email protected]> wrote:\n\n> Thanks Adreas,\n> \n> Already try your suggestion but it not help. This is the index i created :\n> \n> CREATE INDEX tbltransaksi_idx10 ON public.tbltransaksi\n> USING btree ((date_part('year'::text, tanggal)));\n\nI wrote:\n\n> create index xxx on public.tbltransaksi((extract(year from\n> tanggal))) where jualid is not null or returjualid is not null;\n\n2 lines, with the where-condition ;-)\n\nYour explain isn't a explain ANALYSE, and it's not for the 2nd query\n(with condition on returjualid)\n\nDo you have propper indexes on tblsupplier.id and tblproduk.produkid?\n\nI see seq-scans there...\n\n\n> \n> Speed is the same. 
Here is the analyse result :\n> \n> \"Limit (cost=11821.17..11822.13 rows=384 width=376)\"\n> \" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WH (...)\"\n> \" CTE qry1\"\n> \" -> Hash Join (cost=3353.66..11446.48 rows=3831 width=84)\"\n> \" Output: tbltransaksi.tanggal, date_part('month'::text, (tbltransaksi.tanggal)::timestamp without time zone), tblsupplier.id, tblsupplier.nama, tbltransaksi.kodebarang, tblproduk.namabarang, tbltransaksi.keluar, CASE WHEN (tbltransaksi.discount <= (...)\"\n> \" Hash Cond: ((tblproduk.supplierid)::text = (tblsupplier.id)::text)\"\n> \" -> Hash Join (cost=3331.98..11276.35 rows=3831 width=67)\"\n> \" Output: tbltransaksi.tanggal, tbltransaksi.kodebarang, tbltransaksi.keluar, tbltransaksi.discount, tbltransaksi.harga, tblproduk.namabarang, tblproduk.supplierid\"\n> \" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n> \" -> Bitmap Heap Scan on public.tbltransaksi (cost=79.55..7952.09 rows=3831 width=29)\"\n> \" Output: tbltransaksi.id, tbltransaksi.tanggal, tbltransaksi.kodebarang, tbltransaksi.masuk, tbltransaksi.keluar, tbltransaksi.satuan, tbltransaksi.keterangan, tbltransaksi.jenis, tbltransaksi.harga, tbltransaksi.discount, tbltransaksi (...)\"\n> \" Recheck Cond: (date_part('year'::text, (tbltransaksi.tanggal)::timestamp without time zone) = 2013::double precision)\"\n> \" Filter: (tbltransaksi.jualid IS NOT NULL)\"\n> \" -> Bitmap Index Scan on tbltransaksi_idx10 (cost=0.00..78.59 rows=4022 width=0)\"\n> \" Index Cond: (date_part('year'::text, (tbltransaksi.tanggal)::timestamp without time zone) = 2013::double precision)\"\n> \" -> Hash (cost=2188.30..2188.30 rows=85130 width=51)\"\n> \" Output: tblproduk.namabarang, tblproduk.produkid, tblproduk.supplierid\"\n> \" -> Seq Scan on public.tblproduk (cost=0.00..2188.30 rows=85130 width=51)\"\n> \" Output: tblproduk.namabarang, tblproduk.produkid, tblproduk.supplierid\"\n> \" -> Hash (cost=14.08..14.08 rows=608 width=26)\"\n> \" Output: tblsupplier.id, tblsupplier.nama\"\n> \" -> Seq Scan on public.tblsupplier (cost=0.00..14.08 rows=608 width=26)\"\n> \" Output: tblsupplier.id, tblsupplier.nama\"\n> \" -> Sort (cost=374.69..375.65 rows=384 width=376)\"\n> \" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(C (...)\"\n> \" Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n> \" -> HashAggregate (cost=354.37..358.21 rows=384 width=376)\"\n> \" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END), sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END), sum( (...)\"\n> \" -> CTE Scan on qry1 (cost=0.00..76.62 rows=3831 width=376)\"\n> \" Output: qry1.tanggal, qry1.bulan, qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, qry1.keluar, qry1.jumlah\"\n> \n> On Dec 1, 2013, at 3:12 PM, Andreas Kretschmer wrote:\n> \n> > Hengky Liwandouw <[email protected]> wrote:\n> >> \n> >> But the problem is : when i change the where clause to :\n> >> \n> >> where jualid is not null or returjualid is not null\n> >> and extract(year from tanggal)='2013')\n> > \n> > Try 
to create this index:\n> > \n> > create index xxx on public.tbltransaksi((extract(year from tanggal)))\n> > where jualid is not null or returjualid is not null;\n> > \n> > an run the query again, and if this not helps show us explain analyse,\n> > you can use explain.depesz.com to provide us the plan.\n> > \n> > \n> > Andreas\n> > -- \n> > Really, I'm not out to destroy Microsoft. That will just be a completely\n> > unintentional side effect. (Linus Torvalds)\n> > \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n> > Kaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n> > \n> > \n> > -- \n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Dec 2013 09:35:41 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Ok, i just recreate the index :\n\nCREATE INDEX tbltransaksi_idx10\n ON tbltransaksi\n USING btree\n (date_part('year'::text, tanggal))\n WHERE jualid IS NOT NULL OR returjualid IS NOT NULL;\n\n(PGAdminIII always convert extract(year from tanggal) to date_part('year'::text,tanggal))\n\nThis is the product table\n\nCREATE TABLE public.tblproduk (\n produkid VARCHAR(20) NOT NULL,\n namabarang VARCHAR(50),\n hargajual NUMERIC(15,2) DEFAULT 0,\n subkategoriid VARCHAR(10),\n createby VARCHAR(10),\n kodepromo VARCHAR(10),\n satuan VARCHAR(5),\n foto BYTEA,\n pajak BOOLEAN,\n listingfee BOOLEAN,\n supplierid VARCHAR(20),\n modifyby VARCHAR(10),\n qtygrosir INTEGER DEFAULT 0,\n hargagrosir NUMERIC(15,2) DEFAULT 0,\n diskonjual NUMERIC(5,2) DEFAULT 0,\n modal NUMERIC(15,2) DEFAULT 0,\n CONSTRAINT tblproduk_pkey PRIMARY KEY(produkid)\n) \nWITH (oids = false);\n\nCREATE INDEX tblproduk_idx ON public.tblproduk\n USING btree (namabarang COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tblproduk_idx1 ON public.tblproduk\n USING btree (supplierid COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tblproduk_idx2 ON public.tblproduk\n USING btree (subkategoriid COLLATE pg_catalog.\"default\");\n\n\nSupplier table :\n\nCREATE TABLE public.tblsupplier (\n id VARCHAR(20) NOT NULL,\n nama VARCHAR(50),\n alamat VARCHAR(50),\n telepon VARCHAR(50),\n kontak VARCHAR(50),\n email VARCHAR(50),\n kota VARCHAR(50),\n hp VARCHAR(50),\n createby VARCHAR(10),\n modifyby VARCHAR(10),\n CONSTRAINT tblsupplier_pkey PRIMARY KEY(id)\n) \nWITH (oids = false);\n\nCREATE INDEX tblsupplier_idx ON public.tblsupplier\n USING btree (nama COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tblsupplier_idx1 ON public.tblsupplier\n USING btree (kota COLLATE pg_catalog.\"default\");\n\nTransaksi table :\n\nCREATE TABLE public.tbltransaksi (\n id INTEGER NOT NULL,\n tanggal DATE,\n kodebarang VARCHAR(20),\n masuk NUMERIC(10,2) DEFAULT 0,\n keluar NUMERIC(10,2) DEFAULT 0,\n satuan VARCHAR(5),\n keterangan VARCHAR(30),\n jenis VARCHAR(5),\n harga NUMERIC(15,2) DEFAULT 0,\n discount NUMERIC(10,2) DEFAULT 0,\n jualid INTEGER,\n beliid INTEGER,\n 
mutasiid INTEGER,\n nobukti VARCHAR(20),\n customerid VARCHAR(20),\n modal NUMERIC(15,2) DEFAULT 0,\n awalid INTEGER,\n terimabrgid INTEGER,\n opnameid INTEGER,\n returjualid INTEGER,\n returbeliid INTEGER,\n CONSTRAINT tbltransaksi_pkey PRIMARY KEY(id),\n CONSTRAINT tbltransaksi_fk FOREIGN KEY (returjualid)\n REFERENCES public.tblreturjual(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n DEFERRABLE\n INITIALLY IMMEDIATE,\n CONSTRAINT tbltransaksi_fk1 FOREIGN KEY (jualid)\n REFERENCES public.tblpenjualan(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n NOT DEFERRABLE,\n CONSTRAINT tbltransaksi_fk2 FOREIGN KEY (beliid)\n REFERENCES public.tblpembelian(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n NOT DEFERRABLE,\n CONSTRAINT tbltransaksi_fk3 FOREIGN KEY (mutasiid)\n REFERENCES public.tblmutasi(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n NOT DEFERRABLE,\n CONSTRAINT tbltransaksi_fk4 FOREIGN KEY (returbeliid)\n REFERENCES public.tblreturbeli(id)\n ON DELETE CASCADE\n ON UPDATE NO ACTION\n NOT DEFERRABLE\n) \nWITH (oids = false);\n\nCREATE INDEX tbltransaksi_idx ON public.tbltransaksi\n USING btree (tanggal);\n\nCREATE INDEX tbltransaksi_idx1 ON public.tbltransaksi\n USING btree (kodebarang COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tbltransaksi_idx10 ON public.tbltransaksi\n USING btree ((date_part('year'::text, tanggal)))\n WHERE ((jualid IS NOT NULL) OR (returjualid IS NOT NULL));\n\nCREATE INDEX tbltransaksi_idx2 ON public.tbltransaksi\n USING btree (customerid COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tbltransaksi_idx3 ON public.tbltransaksi\n USING btree (awalid);\n\nCREATE INDEX tbltransaksi_idx4 ON public.tbltransaksi\n USING btree (jualid);\n\nCREATE INDEX tbltransaksi_idx5 ON public.tbltransaksi\n USING btree (beliid);\n\nCREATE INDEX tbltransaksi_idx6 ON public.tbltransaksi\n USING btree (mutasiid);\n\nCREATE INDEX tbltransaksi_idx7 ON public.tbltransaksi\n USING btree (opnameid);\n\nCREATE INDEX tbltransaksi_idx8 ON public.tbltransaksi\n USING btree (returjualid);\n\nCREATE INDEX tbltransaksi_idx9 ON public.tbltransaksi\n USING btree (returbeliid);\n\n\nthe query that run slow:\n\nwith qry1 as \n(select tanggal, extract(month from tanggal) as bulan, tblsupplier.id, nama, kodebarang, namabarang, keluar, \n\tcase when discount<=100 then\n\t keluar*(harga -(discount/100*harga))\n\twhen tbltransaksi.discount>100 then\n\t\tkeluar*(harga-discount)\n\tend \n as jumlah\nfrom tbltransaksi \njoin tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\njoin tblsupplier on tblproduk.supplierid=tblsupplier.id\nwhere jualid is not null or returjualid is not null\nand extract(year from tanggal)='2013')\n\nselect \n id, nama, kodebarang, namabarang,\n sum(case when bulan = 1 then keluar else 0 end) as Jan,\n sum(case when bulan = 2 then keluar else 0 end) as Feb,\n sum(case when bulan = 3 then keluar else 0 end) as Maret,\n sum(case when bulan = 4 then keluar else 0 end) as April,\n sum(case when bulan = 5 then keluar else 0 end) as Mei,\n sum(case when bulan = 6 then keluar else 0 end) as Juni,\n sum(case when bulan = 7 then keluar else 0 end) as Juli,\n sum(case when bulan = 8 then keluar else 0 end) as Agust,\n sum(case when bulan = 9 then keluar else 0 end) as Sept,\n sum(case when bulan = 10 then keluar else 0 end) as Okt,\n sum(case when bulan = 11 then keluar else 0 end) as Nov,\n sum(case when bulan = 12 then keluar else 0 end) as Des,\n sum(coalesce(keluar,0)) as total\nfrom qry1\ngroup by id, nama, kodebarang, namabarang\norder by total desc\nlimit 1000\n\nthis is the 
explain analyse :\n\n\"Limit (cost=346389.90..346392.40 rows=1000 width=376) (actual time=56765.848..56766.229 rows=1000 loops=1)\"\n\" CTE qry1\"\n\" -> Hash Join (cost=4444.64..62683.91 rows=766519 width=84) (actual time=87.342..1786.851 rows=737662 loops=1)\"\n\" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n\" -> Seq Scan on tbltransaksi (cost=0.00..24704.06 rows=766519 width=29) (actual time=0.010..271.147 rows=767225 loops=1)\"\n\" Filter: ((jualid IS NOT NULL) OR ((returjualid IS NOT NULL) AND (date_part('year'::text, (tanggal)::timestamp without time zone) = 2013::double precision)))\"\n\" Rows Removed by Filter: 37441\"\n\" -> Hash (cost=3380.52..3380.52 rows=85130 width=68) (actual time=87.265..87.265 rows=65219 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 5855kB\"\n\" -> Hash Join (cost=21.68..3380.52 rows=85130 width=68) (actual time=0.748..59.469 rows=65219 loops=1)\"\n\" Hash Cond: ((tblproduk.supplierid)::text = (tblsupplier.id)::text)\"\n\" -> Seq Scan on tblproduk (cost=0.00..2188.30 rows=85130 width=51) (actual time=0.005..17.184 rows=85034 loops=1)\"\n\" -> Hash (cost=14.08..14.08 rows=608 width=26) (actual time=0.730..0.730 rows=609 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 28kB\"\n\" -> Seq Scan on tblsupplier (cost=0.00..14.08 rows=608 width=26) (actual time=0.006..0.298 rows=609 loops=1)\"\n\" -> Sort (cost=283705.99..283897.62 rows=76652 width=376) (actual time=56765.846..56766.006 rows=1000 loops=1)\"\n\" Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n\" Sort Method: top-N heapsort Memory: 280kB\"\n\" -> GroupAggregate (cost=221247.80..279503.25 rows=76652 width=376) (actual time=50731.735..56739.181 rows=23630 loops=1)\"\n\" -> Sort (cost=221247.80..223164.10 rows=766519 width=376) (actual time=50731.687..54455.528 rows=737662 loops=1)\"\n\" Sort Key: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang\"\n\" Sort Method: external merge Disk: 71872kB\"\n\" -> CTE Scan on qry1 (cost=0.00..15330.38 rows=766519 width=376) (actual time=87.346..2577.066 rows=737662 loops=1)\"\n\"Total runtime: 56787.136 ms\"\n\n\nHope you can help.\n\n\n\nOn Dec 1, 2013, at 4:35 PM, Andreas Kretschmer wrote:\n\n> Hengky Liwandouw <[email protected]> wrote:\n> \n>> Thanks Adreas,\n>> \n>> Already try your suggestion but it not help. This is the index i created :\n>> \n>> CREATE INDEX tbltransaksi_idx10 ON public.tbltransaksi\n>> USING btree ((date_part('year'::text, tanggal)));\n> \n> I wrote:\n> \n>> create index xxx on public.tbltransaksi((extract(year from\n>> tanggal))) where jualid is not null or returjualid is not null;\n> \n> 2 lines, with the where-condition ;-)\n> \n> Your explain isn't a explain ANALYSE, and it's not for the 2nd query\n> (with condition on returjualid)\n> \n> Do you have propper indexes on tblsupplier.id and tblproduk.produkid?\n> \n> I see seq-scans there...\n> \n> \n>> \n>> Speed is the same. 
Here is the analyse result :\n>> \n>> \"Limit (cost=11821.17..11822.13 rows=384 width=376)\"\n>> \" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WH (...)\"\n>> \" CTE qry1\"\n>> \" -> Hash Join (cost=3353.66..11446.48 rows=3831 width=84)\"\n>> \" Output: tbltransaksi.tanggal, date_part('month'::text, (tbltransaksi.tanggal)::timestamp without time zone), tblsupplier.id, tblsupplier.nama, tbltransaksi.kodebarang, tblproduk.namabarang, tbltransaksi.keluar, CASE WHEN (tbltransaksi.discount <= (...)\"\n>> \" Hash Cond: ((tblproduk.supplierid)::text = (tblsupplier.id)::text)\"\n>> \" -> Hash Join (cost=3331.98..11276.35 rows=3831 width=67)\"\n>> \" Output: tbltransaksi.tanggal, tbltransaksi.kodebarang, tbltransaksi.keluar, tbltransaksi.discount, tbltransaksi.harga, tblproduk.namabarang, tblproduk.supplierid\"\n>> \" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n>> \" -> Bitmap Heap Scan on public.tbltransaksi (cost=79.55..7952.09 rows=3831 width=29)\"\n>> \" Output: tbltransaksi.id, tbltransaksi.tanggal, tbltransaksi.kodebarang, tbltransaksi.masuk, tbltransaksi.keluar, tbltransaksi.satuan, tbltransaksi.keterangan, tbltransaksi.jenis, tbltransaksi.harga, tbltransaksi.discount, tbltransaksi (...)\"\n>> \" Recheck Cond: (date_part('year'::text, (tbltransaksi.tanggal)::timestamp without time zone) = 2013::double precision)\"\n>> \" Filter: (tbltransaksi.jualid IS NOT NULL)\"\n>> \" -> Bitmap Index Scan on tbltransaksi_idx10 (cost=0.00..78.59 rows=4022 width=0)\"\n>> \" Index Cond: (date_part('year'::text, (tbltransaksi.tanggal)::timestamp without time zone) = 2013::double precision)\"\n>> \" -> Hash (cost=2188.30..2188.30 rows=85130 width=51)\"\n>> \" Output: tblproduk.namabarang, tblproduk.produkid, tblproduk.supplierid\"\n>> \" -> Seq Scan on public.tblproduk (cost=0.00..2188.30 rows=85130 width=51)\"\n>> \" Output: tblproduk.namabarang, tblproduk.produkid, tblproduk.supplierid\"\n>> \" -> Hash (cost=14.08..14.08 rows=608 width=26)\"\n>> \" Output: tblsupplier.id, tblsupplier.nama\"\n>> \" -> Seq Scan on public.tblsupplier (cost=0.00..14.08 rows=608 width=26)\"\n>> \" Output: tblsupplier.id, tblsupplier.nama\"\n>> \" -> Sort (cost=374.69..375.65 rows=384 width=376)\"\n>> \" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(C (...)\"\n>> \" Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n>> \" -> HashAggregate (cost=354.37..358.21 rows=384 width=376)\"\n>> \" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END), sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END), sum( (...)\"\n>> \" -> CTE Scan on qry1 (cost=0.00..76.62 rows=3831 width=376)\"\n>> \" Output: qry1.tanggal, qry1.bulan, qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, qry1.keluar, qry1.jumlah\"\n>> \n>> On Dec 1, 2013, at 3:12 PM, Andreas Kretschmer wrote:\n>> \n>>> Hengky Liwandouw <[email protected]> wrote:\n>>>> \n>>>> But the problem is : when i change the where clause to :\n>>>> \n>>>> where jualid is not null or returjualid is not null\n>>>> and extract(year from 
tanggal)='2013')\n>>> \n>>> Try to create this index:\n>>> \n>>> create index xxx on public.tbltransaksi((extract(year from tanggal)))\n>>> where jualid is not null or returjualid is not null;\n>>> \n>>> an run the query again, and if this not helps show us explain analyse,\n>>> you can use explain.depesz.com to provide us the plan.\n>>> \n>>> \n>>> Andreas\n>>> -- \n>>> Really, I'm not out to destroy Microsoft. That will just be a completely\n>>> unintentional side effect. (Linus Torvalds)\n>>> \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n>>> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n>>> \n>>> \n>>> -- \n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n> \n> \n> Andreas\n> -- \n> Really, I'm not out to destroy Microsoft. That will just be a completely\n> unintentional side effect. (Linus Torvalds)\n> \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Dec 2013 17:07:29 +0800", "msg_from": "Hengky Liwandouw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Hello,\n your problem seems to arises from the sort that id sone to\ndisk :\n\n\" -> Sort (cost=221247.80..223164.10 rows=766519 width=376)\n(actual time=50731.687..54455.528 rows=737662 loops=1)\"\n\" Sort Key: qry1.id, qry1.nama, qry1.kodebarang,\nqry1.namabarang\"\n\" Sort Method: external merge Disk: 71872kB\"\n\" -> CTE Scan on qry1 (cost=0.00..15330.38 rows=766519\nwidth=376) (actual time=87.346..2577.066 rows=737662 loops=1)\"\n infact the qry1 is builded in 2.5 seconds but for the sort it neeses\naround 50 seconds.\nTry to increase work_mem to almost 100 MB and see if the sort will done in\nmemory.\n\nit's better you use the explain (analyze,buffers) so we could see the\nnumber of buffers hitted in shared memory.\n\nCould you post how much big in Mb are this tables ?\n\nMat\n\n\n2013/12/1 Hengky Liwandouw <[email protected]>\n\n> Ok, i just recreate the index :\n>\n> CREATE INDEX tbltransaksi_idx10\n> ON tbltransaksi\n> USING btree\n> (date_part('year'::text, tanggal))\n> WHERE jualid IS NOT NULL OR returjualid IS NOT NULL;\n>\n> (PGAdminIII always convert extract(year from tanggal) to\n> date_part('year'::text,tanggal))\n>\n> This is the product table\n>\n> CREATE TABLE public.tblproduk (\n> produkid VARCHAR(20) NOT NULL,\n> namabarang VARCHAR(50),\n> hargajual NUMERIC(15,2) DEFAULT 0,\n> subkategoriid VARCHAR(10),\n> createby VARCHAR(10),\n> kodepromo VARCHAR(10),\n> satuan VARCHAR(5),\n> foto BYTEA,\n> pajak BOOLEAN,\n> listingfee BOOLEAN,\n> supplierid VARCHAR(20),\n> modifyby VARCHAR(10),\n> qtygrosir INTEGER DEFAULT 0,\n> hargagrosir NUMERIC(15,2) DEFAULT 0,\n> diskonjual NUMERIC(5,2) DEFAULT 0,\n> modal NUMERIC(15,2) DEFAULT 0,\n> CONSTRAINT tblproduk_pkey PRIMARY KEY(produkid)\n> )\n> WITH (oids = false);\n>\n> CREATE INDEX tblproduk_idx ON public.tblproduk\n> USING btree (namabarang COLLATE pg_catalog.\"default\");\n>\n> CREATE INDEX tblproduk_idx1 ON 
public.tblproduk\n> USING btree (supplierid COLLATE pg_catalog.\"default\");\n>\n> CREATE INDEX tblproduk_idx2 ON public.tblproduk\n> USING btree (subkategoriid COLLATE pg_catalog.\"default\");\n>\n>\n> Supplier table :\n>\n> CREATE TABLE public.tblsupplier (\n> id VARCHAR(20) NOT NULL,\n> nama VARCHAR(50),\n> alamat VARCHAR(50),\n> telepon VARCHAR(50),\n> kontak VARCHAR(50),\n> email VARCHAR(50),\n> kota VARCHAR(50),\n> hp VARCHAR(50),\n> createby VARCHAR(10),\n> modifyby VARCHAR(10),\n> CONSTRAINT tblsupplier_pkey PRIMARY KEY(id)\n> )\n> WITH (oids = false);\n>\n> CREATE INDEX tblsupplier_idx ON public.tblsupplier\n> USING btree (nama COLLATE pg_catalog.\"default\");\n>\n> CREATE INDEX tblsupplier_idx1 ON public.tblsupplier\n> USING btree (kota COLLATE pg_catalog.\"default\");\n>\n> Transaksi table :\n>\n> CREATE TABLE public.tbltransaksi (\n> id INTEGER NOT NULL,\n> tanggal DATE,\n> kodebarang VARCHAR(20),\n> masuk NUMERIC(10,2) DEFAULT 0,\n> keluar NUMERIC(10,2) DEFAULT 0,\n> satuan VARCHAR(5),\n> keterangan VARCHAR(30),\n> jenis VARCHAR(5),\n> harga NUMERIC(15,2) DEFAULT 0,\n> discount NUMERIC(10,2) DEFAULT 0,\n> jualid INTEGER,\n> beliid INTEGER,\n> mutasiid INTEGER,\n> nobukti VARCHAR(20),\n> customerid VARCHAR(20),\n> modal NUMERIC(15,2) DEFAULT 0,\n> awalid INTEGER,\n> terimabrgid INTEGER,\n> opnameid INTEGER,\n> returjualid INTEGER,\n> returbeliid INTEGER,\n> CONSTRAINT tbltransaksi_pkey PRIMARY KEY(id),\n> CONSTRAINT tbltransaksi_fk FOREIGN KEY (returjualid)\n> REFERENCES public.tblreturjual(id)\n> ON DELETE CASCADE\n> ON UPDATE NO ACTION\n> DEFERRABLE\n> INITIALLY IMMEDIATE,\n> CONSTRAINT tbltransaksi_fk1 FOREIGN KEY (jualid)\n> REFERENCES public.tblpenjualan(id)\n> ON DELETE CASCADE\n> ON UPDATE NO ACTION\n> NOT DEFERRABLE,\n> CONSTRAINT tbltransaksi_fk2 FOREIGN KEY (beliid)\n> REFERENCES public.tblpembelian(id)\n> ON DELETE CASCADE\n> ON UPDATE NO ACTION\n> NOT DEFERRABLE,\n> CONSTRAINT tbltransaksi_fk3 FOREIGN KEY (mutasiid)\n> REFERENCES public.tblmutasi(id)\n> ON DELETE CASCADE\n> ON UPDATE NO ACTION\n> NOT DEFERRABLE,\n> CONSTRAINT tbltransaksi_fk4 FOREIGN KEY (returbeliid)\n> REFERENCES public.tblreturbeli(id)\n> ON DELETE CASCADE\n> ON UPDATE NO ACTION\n> NOT DEFERRABLE\n> )\n> WITH (oids = false);\n>\n> CREATE INDEX tbltransaksi_idx ON public.tbltransaksi\n> USING btree (tanggal);\n>\n> CREATE INDEX tbltransaksi_idx1 ON public.tbltransaksi\n> USING btree (kodebarang COLLATE pg_catalog.\"default\");\n>\n> CREATE INDEX tbltransaksi_idx10 ON public.tbltransaksi\n> USING btree ((date_part('year'::text, tanggal)))\n> WHERE ((jualid IS NOT NULL) OR (returjualid IS NOT NULL));\n>\n> CREATE INDEX tbltransaksi_idx2 ON public.tbltransaksi\n> USING btree (customerid COLLATE pg_catalog.\"default\");\n>\n> CREATE INDEX tbltransaksi_idx3 ON public.tbltransaksi\n> USING btree (awalid);\n>\n> CREATE INDEX tbltransaksi_idx4 ON public.tbltransaksi\n> USING btree (jualid);\n>\n> CREATE INDEX tbltransaksi_idx5 ON public.tbltransaksi\n> USING btree (beliid);\n>\n> CREATE INDEX tbltransaksi_idx6 ON public.tbltransaksi\n> USING btree (mutasiid);\n>\n> CREATE INDEX tbltransaksi_idx7 ON public.tbltransaksi\n> USING btree (opnameid);\n>\n> CREATE INDEX tbltransaksi_idx8 ON public.tbltransaksi\n> USING btree (returjualid);\n>\n> CREATE INDEX tbltransaksi_idx9 ON public.tbltransaksi\n> USING btree (returbeliid);\n>\n>\n> the query that run slow:\n>\n> with qry1 as\n> (select tanggal, extract(month from tanggal) as bulan, tblsupplier.id,\n> nama, kodebarang, namabarang, keluar,\n> case when 
discount<=100 then\n> keluar*(harga -(discount/100*harga))\n> when tbltransaksi.discount>100 then\n> keluar*(harga-discount)\n> end\n> as jumlah\n> from tbltransaksi\n> join tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n> join tblsupplier on tblproduk.supplierid=tblsupplier.id\n> where jualid is not null or returjualid is not null\n> and extract(year from tanggal)='2013')\n>\n> select\n> id, nama, kodebarang, namabarang,\n> sum(case when bulan = 1 then keluar else 0 end) as Jan,\n> sum(case when bulan = 2 then keluar else 0 end) as Feb,\n> sum(case when bulan = 3 then keluar else 0 end) as Maret,\n> sum(case when bulan = 4 then keluar else 0 end) as April,\n> sum(case when bulan = 5 then keluar else 0 end) as Mei,\n> sum(case when bulan = 6 then keluar else 0 end) as Juni,\n> sum(case when bulan = 7 then keluar else 0 end) as Juli,\n> sum(case when bulan = 8 then keluar else 0 end) as Agust,\n> sum(case when bulan = 9 then keluar else 0 end) as Sept,\n> sum(case when bulan = 10 then keluar else 0 end) as Okt,\n> sum(case when bulan = 11 then keluar else 0 end) as Nov,\n> sum(case when bulan = 12 then keluar else 0 end) as Des,\n> sum(coalesce(keluar,0)) as total\n> from qry1\n> group by id, nama, kodebarang, namabarang\n> order by total desc\n> limit 1000\n>\n> this is the explain analyse :\n>\n> \"Limit (cost=346389.90..346392.40 rows=1000 width=376) (actual\n> time=56765.848..56766.229 rows=1000 loops=1)\"\n> \" CTE qry1\"\n> \" -> Hash Join (cost=4444.64..62683.91 rows=766519 width=84) (actual\n> time=87.342..1786.851 rows=737662 loops=1)\"\n> \" Hash Cond: ((tbltransaksi.kodebarang)::text =\n> (tblproduk.produkid)::text)\"\n> \" -> Seq Scan on tbltransaksi (cost=0.00..24704.06 rows=766519\n> width=29) (actual time=0.010..271.147 rows=767225 loops=1)\"\n> \" Filter: ((jualid IS NOT NULL) OR ((returjualid IS NOT\n> NULL) AND (date_part('year'::text, (tanggal)::timestamp without time zone)\n> = 2013::double precision)))\"\n> \" Rows Removed by Filter: 37441\"\n> \" -> Hash (cost=3380.52..3380.52 rows=85130 width=68) (actual\n> time=87.265..87.265 rows=65219 loops=1)\"\n> \" Buckets: 16384 Batches: 1 Memory Usage: 5855kB\"\n> \" -> Hash Join (cost=21.68..3380.52 rows=85130 width=68)\n> (actual time=0.748..59.469 rows=65219 loops=1)\"\n> \" Hash Cond: ((tblproduk.supplierid)::text = (\n> tblsupplier.id)::text)\"\n> \" -> Seq Scan on tblproduk (cost=0.00..2188.30\n> rows=85130 width=51) (actual time=0.005..17.184 rows=85034 loops=1)\"\n> \" -> Hash (cost=14.08..14.08 rows=608 width=26)\n> (actual time=0.730..0.730 rows=609 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 28kB\"\n> \" -> Seq Scan on tblsupplier\n> (cost=0.00..14.08 rows=608 width=26) (actual time=0.006..0.298 rows=609\n> loops=1)\"\n> \" -> Sort (cost=283705.99..283897.62 rows=76652 width=376) (actual\n> time=56765.846..56766.006 rows=1000 loops=1)\"\n> \" Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n> \" Sort Method: top-N heapsort Memory: 280kB\"\n> \" -> GroupAggregate (cost=221247.80..279503.25 rows=76652\n> width=376) (actual time=50731.735..56739.181 rows=23630 loops=1)\"\n> \" -> Sort (cost=221247.80..223164.10 rows=766519 width=376)\n> (actual time=50731.687..54455.528 rows=737662 loops=1)\"\n> \" Sort Key: qry1.id, qry1.nama, qry1.kodebarang,\n> qry1.namabarang\"\n> \" Sort Method: external merge Disk: 71872kB\"\n> \" -> CTE Scan on qry1 (cost=0.00..15330.38\n> rows=766519 width=376) (actual time=87.346..2577.066 rows=737662 loops=1)\"\n> \"Total runtime: 56787.136 ms\"\n>\n>\n> Hope you 
can help.\n>\n>\n>\n> On Dec 1, 2013, at 4:35 PM, Andreas Kretschmer wrote:\n>\n> > Hengky Liwandouw <[email protected]> wrote:\n> >\n> >> Thanks Adreas,\n> >>\n> >> Already try your suggestion but it not help. This is the index i\n> created :\n> >>\n> >> CREATE INDEX tbltransaksi_idx10 ON public.tbltransaksi\n> >> USING btree ((date_part('year'::text, tanggal)));\n> >\n> > I wrote:\n> >\n> >> create index xxx on public.tbltransaksi((extract(year from\n> >> tanggal))) where jualid is not null or returjualid is not null;\n> >\n> > 2 lines, with the where-condition ;-)\n> >\n> > Your explain isn't a explain ANALYSE, and it's not for the 2nd query\n> > (with condition on returjualid)\n> >\n> > Do you have propper indexes on tblsupplier.id and tblproduk.produkid?\n> >\n> > I see seq-scans there...\n> >\n> >\n> >>\n> >> Speed is the same. Here is the analyse result :\n> >>\n> >> \"Limit (cost=11821.17..11822.13 rows=384 width=376)\"\n> >> \" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang,\n> (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE\n> 0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN\n> qry1.keluar ELSE 0::numeric END)), (sum(CASE WH (...)\"\n> >> \" CTE qry1\"\n> >> \" -> Hash Join (cost=3353.66..11446.48 rows=3831 width=84)\"\n> >> \" Output: tbltransaksi.tanggal, date_part('month'::text,\n> (tbltransaksi.tanggal)::timestamp without time zone), tblsupplier.id,\n> tblsupplier.nama, tbltransaksi.kodebarang, tblproduk.namabarang,\n> tbltransaksi.keluar, CASE WHEN (tbltransaksi.discount <= (...)\"\n> >> \" Hash Cond: ((tblproduk.supplierid)::text = (tblsupplier.id\n> )::text)\"\n> >> \" -> Hash Join (cost=3331.98..11276.35 rows=3831 width=67)\"\n> >> \" Output: tbltransaksi.tanggal, tbltransaksi.kodebarang,\n> tbltransaksi.keluar, tbltransaksi.discount, tbltransaksi.harga,\n> tblproduk.namabarang, tblproduk.supplierid\"\n> >> \" Hash Cond: ((tbltransaksi.kodebarang)::text =\n> (tblproduk.produkid)::text)\"\n> >> \" -> Bitmap Heap Scan on public.tbltransaksi\n> (cost=79.55..7952.09 rows=3831 width=29)\"\n> >> \" Output: tbltransaksi.id, tbltransaksi.tanggal,\n> tbltransaksi.kodebarang, tbltransaksi.masuk, tbltransaksi.keluar,\n> tbltransaksi.satuan, tbltransaksi.keterangan, tbltransaksi.jenis,\n> tbltransaksi.harga, tbltransaksi.discount, tbltransaksi (...)\"\n> >> \" Recheck Cond: (date_part('year'::text,\n> (tbltransaksi.tanggal)::timestamp without time zone) = 2013::double\n> precision)\"\n> >> \" Filter: (tbltransaksi.jualid IS NOT NULL)\"\n> >> \" -> Bitmap Index Scan on tbltransaksi_idx10\n> (cost=0.00..78.59 rows=4022 width=0)\"\n> >> \" Index Cond: (date_part('year'::text,\n> (tbltransaksi.tanggal)::timestamp without time zone) = 2013::double\n> precision)\"\n> >> \" -> Hash (cost=2188.30..2188.30 rows=85130 width=51)\"\n> >> \" Output: tblproduk.namabarang,\n> tblproduk.produkid, tblproduk.supplierid\"\n> >> \" -> Seq Scan on public.tblproduk\n> (cost=0.00..2188.30 rows=85130 width=51)\"\n> >> \" Output: tblproduk.namabarang,\n> tblproduk.produkid, tblproduk.supplierid\"\n> >> \" -> Hash (cost=14.08..14.08 rows=608 width=26)\"\n> >> \" Output: tblsupplier.id, tblsupplier.nama\"\n> >> \" -> Seq Scan on public.tblsupplier (cost=0.00..14.08\n> rows=608 width=26)\"\n> >> \" Output: tblsupplier.id, tblsupplier.nama\"\n> >> \" -> Sort (cost=374.69..375.65 rows=384 width=376)\"\n> >> \" Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang,\n> (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE\n> 
0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN\n> qry1.keluar ELSE 0::numeric END)), (sum(C (...)\"\n> >> \" Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n> >> \" -> HashAggregate (cost=354.37..358.21 rows=384 width=376)\"\n> >> \" Output: qry1.id, qry1.nama, qry1.kodebarang,\n> qry1.namabarang, sum(CASE WHEN (qry1.bulan = 1::double precision) THEN\n> qry1.keluar ELSE 0::numeric END), sum(CASE WHEN (qry1.bulan = 2::double\n> precision) THEN qry1.keluar ELSE 0::numeric END), sum( (...)\"\n> >> \" -> CTE Scan on qry1 (cost=0.00..76.62 rows=3831\n> width=376)\"\n> >> \" Output: qry1.tanggal, qry1.bulan, qry1.id,\n> qry1.nama, qry1.kodebarang, qry1.namabarang, qry1.keluar, qry1.jumlah\"\n> >>\n> >> On Dec 1, 2013, at 3:12 PM, Andreas Kretschmer wrote:\n> >>\n> >>> Hengky Liwandouw <[email protected]> wrote:\n> >>>>\n> >>>> But the problem is : when i change the where clause to :\n> >>>>\n> >>>> where jualid is not null or returjualid is not null\n> >>>> and extract(year from tanggal)='2013')\n> >>>\n> >>> Try to create this index:\n> >>>\n> >>> create index xxx on public.tbltransaksi((extract(year from tanggal)))\n> >>> where jualid is not null or returjualid is not null;\n> >>>\n> >>> an run the query again, and if this not helps show us explain analyse,\n> >>> you can use explain.depesz.com to provide us the plan.\n> >>>\n> >>>\n> >>> Andreas\n> >>> --\n> >>> Really, I'm not out to destroy Microsoft. That will just be a\n> completely\n> >>> unintentional side effect. (Linus\n> Torvalds)\n> >>> \"If I was god, I would recompile penguin with --enable-fly.\"\n> (unknown)\n> >>> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E\n> 13.56889°\n> >>>\n> >>>\n> >>> --\n> >>> Sent via pgsql-performance mailing list (\n> [email protected])\n> >>> To make changes to your subscription:\n> >>> http://www.postgresql.org/mailpref/pgsql-performance\n> >>\n> >\n> >\n> > Andreas\n> > --\n> > Really, I'm not out to destroy Microsoft. That will just be a completely\n> > unintentional side effect. (Linus Torvalds)\n> > \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n> > Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHello,              your problem seems to arises from  the sort that id sone to disk :\"              ->  Sort  (cost=221247.80..223164.10 rows=766519 \nwidth=376) (actual time=50731.687..54455.528 rows=737662 loops=1)\"\n\"                    Sort Key: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang\"\n\"                    Sort Method: external merge  Disk: 71872kB\"\"                    ->  CTE Scan on qry1  (cost=0.00..15330.38 \nrows=766519 width=376) (actual time=87.346..2577.066 rows=737662 \nloops=1)\" infact the qry1 is builded in 2.5 seconds but for the sort it neeses around 50 seconds.Try to increase work_mem to almost 100 MB and see if the sort will done in memory.it's better you use the explain (analyze,buffers) so we could see  the number of buffers hitted in shared memory.\n\nCould you post how much big in Mb are this tables ? 
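
A minimal way to pull those numbers straight from SQL (a sketch only; the table names are the ones used in this thread, and pg_size_pretty, pg_total_relation_size and pg_database_size are the standard size functions):

select pg_size_pretty(pg_total_relation_size('tbltransaksi')) as tbltransaksi,   -- heap + indexes + toast
       pg_size_pretty(pg_total_relation_size('tblproduk'))    as tblproduk,
       pg_size_pretty(pg_total_relation_size('tblsupplier'))  as tblsupplier,
       pg_size_pretty(pg_database_size(current_database()))   as whole_database;
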
Mat\n2013/12/1 Hengky Liwandouw <[email protected]>\nOk, i just recreate the index :\n\nCREATE INDEX tbltransaksi_idx10\n  ON tbltransaksi\n  USING btree\n  (date_part('year'::text, tanggal))\n  WHERE jualid IS NOT NULL OR returjualid IS NOT NULL;\n\n(PGAdminIII always convert extract(year from tanggal) to date_part('year'::text,tanggal))\n\nThis is the product table\n\nCREATE TABLE public.tblproduk (\n  produkid VARCHAR(20) NOT NULL,\n  namabarang VARCHAR(50),\n  hargajual NUMERIC(15,2) DEFAULT 0,\n  subkategoriid VARCHAR(10),\n  createby VARCHAR(10),\n  kodepromo VARCHAR(10),\n  satuan VARCHAR(5),\n  foto BYTEA,\n  pajak BOOLEAN,\n  listingfee BOOLEAN,\n  supplierid VARCHAR(20),\n  modifyby VARCHAR(10),\n  qtygrosir INTEGER DEFAULT 0,\n  hargagrosir NUMERIC(15,2) DEFAULT 0,\n  diskonjual NUMERIC(5,2) DEFAULT 0,\n  modal NUMERIC(15,2) DEFAULT 0,\n  CONSTRAINT tblproduk_pkey PRIMARY KEY(produkid)\n)\nWITH (oids = false);\n\nCREATE INDEX tblproduk_idx ON public.tblproduk\n  USING btree (namabarang COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tblproduk_idx1 ON public.tblproduk\n  USING btree (supplierid COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tblproduk_idx2 ON public.tblproduk\n  USING btree (subkategoriid COLLATE pg_catalog.\"default\");\n\n\nSupplier table :\n\nCREATE TABLE public.tblsupplier (\n  id VARCHAR(20) NOT NULL,\n  nama VARCHAR(50),\n  alamat VARCHAR(50),\n  telepon VARCHAR(50),\n  kontak VARCHAR(50),\n  email VARCHAR(50),\n  kota VARCHAR(50),\n  hp VARCHAR(50),\n  createby VARCHAR(10),\n  modifyby VARCHAR(10),\n  CONSTRAINT tblsupplier_pkey PRIMARY KEY(id)\n)\nWITH (oids = false);\n\nCREATE INDEX tblsupplier_idx ON public.tblsupplier\n  USING btree (nama COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tblsupplier_idx1 ON public.tblsupplier\n  USING btree (kota COLLATE pg_catalog.\"default\");\n\nTransaksi table :\n\nCREATE TABLE public.tbltransaksi (\n  id INTEGER NOT NULL,\n  tanggal DATE,\n  kodebarang VARCHAR(20),\n  masuk NUMERIC(10,2) DEFAULT 0,\n  keluar NUMERIC(10,2) DEFAULT 0,\n  satuan VARCHAR(5),\n  keterangan VARCHAR(30),\n  jenis VARCHAR(5),\n  harga NUMERIC(15,2) DEFAULT 0,\n  discount NUMERIC(10,2) DEFAULT 0,\n  jualid INTEGER,\n  beliid INTEGER,\n  mutasiid INTEGER,\n  nobukti VARCHAR(20),\n  customerid VARCHAR(20),\n  modal NUMERIC(15,2) DEFAULT 0,\n  awalid INTEGER,\n  terimabrgid INTEGER,\n  opnameid INTEGER,\n  returjualid INTEGER,\n  returbeliid INTEGER,\n  CONSTRAINT tbltransaksi_pkey PRIMARY KEY(id),\n  CONSTRAINT tbltransaksi_fk FOREIGN KEY (returjualid)\n    REFERENCES public.tblreturjual(id)\n    ON DELETE CASCADE\n    ON UPDATE NO ACTION\n    DEFERRABLE\n    INITIALLY IMMEDIATE,\n  CONSTRAINT tbltransaksi_fk1 FOREIGN KEY (jualid)\n    REFERENCES public.tblpenjualan(id)\n    ON DELETE CASCADE\n    ON UPDATE NO ACTION\n    NOT DEFERRABLE,\n  CONSTRAINT tbltransaksi_fk2 FOREIGN KEY (beliid)\n    REFERENCES public.tblpembelian(id)\n    ON DELETE CASCADE\n    ON UPDATE NO ACTION\n    NOT DEFERRABLE,\n  CONSTRAINT tbltransaksi_fk3 FOREIGN KEY (mutasiid)\n    REFERENCES public.tblmutasi(id)\n    ON DELETE CASCADE\n    ON UPDATE NO ACTION\n    NOT DEFERRABLE,\n  CONSTRAINT tbltransaksi_fk4 FOREIGN KEY (returbeliid)\n    REFERENCES public.tblreturbeli(id)\n    ON DELETE CASCADE\n    ON UPDATE NO ACTION\n    NOT DEFERRABLE\n)\nWITH (oids = false);\n\nCREATE INDEX tbltransaksi_idx ON public.tbltransaksi\n  USING btree (tanggal);\n\nCREATE INDEX tbltransaksi_idx1 ON public.tbltransaksi\n  USING btree (kodebarang COLLATE 
pg_catalog.\"default\");\n\nCREATE INDEX tbltransaksi_idx10 ON public.tbltransaksi\n  USING btree ((date_part('year'::text, tanggal)))\n  WHERE ((jualid IS NOT NULL) OR (returjualid IS NOT NULL));\n\nCREATE INDEX tbltransaksi_idx2 ON public.tbltransaksi\n  USING btree (customerid COLLATE pg_catalog.\"default\");\n\nCREATE INDEX tbltransaksi_idx3 ON public.tbltransaksi\n  USING btree (awalid);\n\nCREATE INDEX tbltransaksi_idx4 ON public.tbltransaksi\n  USING btree (jualid);\n\nCREATE INDEX tbltransaksi_idx5 ON public.tbltransaksi\n  USING btree (beliid);\n\nCREATE INDEX tbltransaksi_idx6 ON public.tbltransaksi\n  USING btree (mutasiid);\n\nCREATE INDEX tbltransaksi_idx7 ON public.tbltransaksi\n  USING btree (opnameid);\n\nCREATE INDEX tbltransaksi_idx8 ON public.tbltransaksi\n  USING btree (returjualid);\n\nCREATE INDEX tbltransaksi_idx9 ON public.tbltransaksi\n  USING btree (returbeliid);\n\n\nthe query that run slow:\n\nwith qry1 as\n(select tanggal, extract(month from tanggal) as bulan, tblsupplier.id, nama, kodebarang, namabarang, keluar,\n        case when discount<=100 then\n            keluar*(harga -(discount/100*harga))\n        when tbltransaksi.discount>100 then\n                keluar*(harga-discount)\n        end\n    as jumlah\nfrom tbltransaksi\njoin tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\njoin tblsupplier on tblproduk.supplierid=tblsupplier.id\nwhere jualid is not null or returjualid is not null\nand extract(year from tanggal)='2013')\n\nselect\n  id, nama, kodebarang, namabarang,\n  sum(case when bulan = 1 then keluar else 0 end) as Jan,\n  sum(case when bulan = 2 then keluar else 0 end) as Feb,\n  sum(case when bulan = 3 then keluar else 0 end) as Maret,\n  sum(case when bulan = 4 then keluar else 0 end) as April,\n  sum(case when bulan = 5 then keluar else 0 end) as Mei,\n  sum(case when bulan = 6 then keluar else 0 end) as Juni,\n  sum(case when bulan = 7 then keluar else 0 end) as Juli,\n  sum(case when bulan = 8 then keluar else 0 end) as Agust,\n  sum(case when bulan = 9 then keluar else 0 end) as Sept,\n  sum(case when bulan = 10 then keluar else 0 end) as Okt,\n  sum(case when bulan = 11 then keluar else 0 end) as Nov,\n  sum(case when bulan = 12 then keluar else 0 end) as Des,\n  sum(coalesce(keluar,0)) as total\nfrom qry1\ngroup by id, nama, kodebarang, namabarang\norder by total desc\nlimit 1000\n\nthis is the explain analyse :\n\n\"Limit  (cost=346389.90..346392.40 rows=1000 width=376) (actual time=56765.848..56766.229 rows=1000 loops=1)\"\n\"  CTE qry1\"\n\"    ->  Hash Join  (cost=4444.64..62683.91 rows=766519 width=84) (actual time=87.342..1786.851 rows=737662 loops=1)\"\n\"          Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n\"          ->  Seq Scan on tbltransaksi  (cost=0.00..24704.06 rows=766519 width=29) (actual time=0.010..271.147 rows=767225 loops=1)\"\n\"                Filter: ((jualid IS NOT NULL) OR ((returjualid IS NOT NULL) AND (date_part('year'::text, (tanggal)::timestamp without time zone) = 2013::double precision)))\"\n\"                Rows Removed by Filter: 37441\"\n\"          ->  Hash  (cost=3380.52..3380.52 rows=85130 width=68) (actual time=87.265..87.265 rows=65219 loops=1)\"\n\"                Buckets: 16384  Batches: 1  Memory Usage: 5855kB\"\n\"                ->  Hash Join  (cost=21.68..3380.52 rows=85130 width=68) (actual time=0.748..59.469 rows=65219 loops=1)\"\n\"                      Hash Cond: ((tblproduk.supplierid)::text = (tblsupplier.id)::text)\"\n\"                      -> 
 Seq Scan on tblproduk  (cost=0.00..2188.30 rows=85130 width=51) (actual time=0.005..17.184 rows=85034 loops=1)\"\n\"                      ->  Hash  (cost=14.08..14.08 rows=608 width=26) (actual time=0.730..0.730 rows=609 loops=1)\"\n\"                            Buckets: 1024  Batches: 1  Memory Usage: 28kB\"\n\"                            ->  Seq Scan on tblsupplier  (cost=0.00..14.08 rows=608 width=26) (actual time=0.006..0.298 rows=609 loops=1)\"\n\"  ->  Sort  (cost=283705.99..283897.62 rows=76652 width=376) (actual time=56765.846..56766.006 rows=1000 loops=1)\"\n\"        Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n\"        Sort Method: top-N heapsort  Memory: 280kB\"\n\"        ->  GroupAggregate  (cost=221247.80..279503.25 rows=76652 width=376) (actual time=50731.735..56739.181 rows=23630 loops=1)\"\n\"              ->  Sort  (cost=221247.80..223164.10 rows=766519 width=376) (actual time=50731.687..54455.528 rows=737662 loops=1)\"\n\"                    Sort Key: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang\"\n\"                    Sort Method: external merge  Disk: 71872kB\"\n\"                    ->  CTE Scan on qry1  (cost=0.00..15330.38 rows=766519 width=376) (actual time=87.346..2577.066 rows=737662 loops=1)\"\n\"Total runtime: 56787.136 ms\"\n\n\nHope you can help.\n\n\n\nOn Dec 1, 2013, at 4:35 PM, Andreas Kretschmer wrote:\n\n> Hengky Liwandouw <[email protected]> wrote:\n>\n>> Thanks Adreas,\n>>\n>> Already try your suggestion but it not help.  This is the index i created :\n>>\n>> CREATE INDEX tbltransaksi_idx10 ON public.tbltransaksi\n>>  USING btree ((date_part('year'::text, tanggal)));\n>\n> I wrote:\n>\n>> create index xxx on public.tbltransaksi((extract(year from\n>> tanggal))) where jualid is not null or returjualid is not null;\n>\n> 2 lines, with the where-condition ;-)\n>\n> Your explain isn't a explain ANALYSE, and it's not for the 2nd query\n> (with condition on returjualid)\n>\n> Do you have propper indexes on tblsupplier.id and tblproduk.produkid?\n>\n> I see seq-scans there...\n>\n>\n>>\n>> Speed is the same. 
Here is the analyse result :\n>>\n>> \"Limit  (cost=11821.17..11822.13 rows=384 width=376)\"\n>> \"  Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WH (...)\"\n\n>> \"  CTE qry1\"\n>> \"    ->  Hash Join  (cost=3353.66..11446.48 rows=3831 width=84)\"\n>> \"          Output: tbltransaksi.tanggal, date_part('month'::text, (tbltransaksi.tanggal)::timestamp without time zone), tblsupplier.id, tblsupplier.nama, tbltransaksi.kodebarang, tblproduk.namabarang, tbltransaksi.keluar, CASE WHEN (tbltransaksi.discount <= (...)\"\n\n>> \"          Hash Cond: ((tblproduk.supplierid)::text = (tblsupplier.id)::text)\"\n>> \"          ->  Hash Join  (cost=3331.98..11276.35 rows=3831 width=67)\"\n>> \"                Output: tbltransaksi.tanggal, tbltransaksi.kodebarang, tbltransaksi.keluar, tbltransaksi.discount, tbltransaksi.harga, tblproduk.namabarang, tblproduk.supplierid\"\n>> \"                Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n>> \"                ->  Bitmap Heap Scan on public.tbltransaksi  (cost=79.55..7952.09 rows=3831 width=29)\"\n>> \"                      Output: tbltransaksi.id, tbltransaksi.tanggal, tbltransaksi.kodebarang, tbltransaksi.masuk, tbltransaksi.keluar, tbltransaksi.satuan, tbltransaksi.keterangan, tbltransaksi.jenis, tbltransaksi.harga, tbltransaksi.discount, tbltransaksi (...)\"\n\n>> \"                      Recheck Cond: (date_part('year'::text, (tbltransaksi.tanggal)::timestamp without time zone) = 2013::double precision)\"\n>> \"                      Filter: (tbltransaksi.jualid IS NOT NULL)\"\n>> \"                      ->  Bitmap Index Scan on tbltransaksi_idx10  (cost=0.00..78.59 rows=4022 width=0)\"\n>> \"                            Index Cond: (date_part('year'::text, (tbltransaksi.tanggal)::timestamp without time zone) = 2013::double precision)\"\n>> \"                ->  Hash  (cost=2188.30..2188.30 rows=85130 width=51)\"\n>> \"                      Output: tblproduk.namabarang, tblproduk.produkid, tblproduk.supplierid\"\n>> \"                      ->  Seq Scan on public.tblproduk  (cost=0.00..2188.30 rows=85130 width=51)\"\n>> \"                            Output: tblproduk.namabarang, tblproduk.produkid, tblproduk.supplierid\"\n>> \"          ->  Hash  (cost=14.08..14.08 rows=608 width=26)\"\n>> \"                Output: tblsupplier.id, tblsupplier.nama\"\n>> \"                ->  Seq Scan on public.tblsupplier  (cost=0.00..14.08 rows=608 width=26)\"\n>> \"                      Output: tblsupplier.id, tblsupplier.nama\"\n>> \"  ->  Sort  (cost=374.69..375.65 rows=384 width=376)\"\n>> \"        Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, (sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END)), (sum(C (...)\"\n\n>> \"        Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n>> \"        ->  HashAggregate  (cost=354.37..358.21 rows=384 width=376)\"\n>> \"              Output: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, sum(CASE WHEN (qry1.bulan = 1::double precision) THEN qry1.keluar ELSE 0::numeric END), sum(CASE WHEN (qry1.bulan = 2::double precision) THEN qry1.keluar ELSE 0::numeric END), sum( (...)\"\n\n>> \"              ->  CTE Scan on qry1  (cost=0.00..76.62 rows=3831 
width=376)\"\n>> \"                    Output: qry1.tanggal, qry1.bulan, qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang, qry1.keluar, qry1.jumlah\"\n>>\n>> On Dec 1, 2013, at 3:12 PM, Andreas Kretschmer wrote:\n>>\n>>> Hengky Liwandouw <[email protected]> wrote:\n>>>>\n>>>> But the problem is : when i change the where clause to :\n>>>>\n>>>> where jualid is not null or returjualid is not null\n>>>> and extract(year from tanggal)='2013')\n>>>\n>>> Try to create this index:\n>>>\n>>> create index xxx on public.tbltransaksi((extract(year from tanggal)))\n>>> where jualid is not null or returjualid is not null;\n>>>\n>>> an run the query again, and if this not helps show us explain analyse,\n>>> you can use explain.depesz.com to provide us the plan.\n>>>\n>>>\n>>> Andreas\n>>> --\n>>> Really, I'm not out to destroy Microsoft. That will just be a completely\n>>> unintentional side effect.                              (Linus Torvalds)\n>>> \"If I was god, I would recompile penguin with --enable-fly.\"   (unknown)\n>>> Kaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n> Andreas\n> --\n> Really, I'm not out to destroy Microsoft. That will just be a completely\n> unintentional side effect.                              (Linus Torvalds)\n> \"If I was god, I would recompile penguin with --enable-fly.\"   (unknown)\n> Kaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 1 Dec 2013 12:27:04 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "On 01/12/13 10:07, Hengky Liwandouw wrote:\n> with qry1 as \n> (select tanggal, extract(month from tanggal) as bulan, tblsupplier.id, nama, kodebarang, namabarang, keluar, \n> \tcase when discount<=100 then\n> \t keluar*(harga -(discount/100*harga))\n> \twhen tbltransaksi.discount>100 then\n> \t\tkeluar*(harga-discount)\n> \tend \n> as jumlah\n> from tbltransaksi \n> join tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n> join tblsupplier on tblproduk.supplierid=tblsupplier.id\n> where jualid is not null or returjualid is not null\n> and extract(year from tanggal)='2013')\n> \n> select \n> id, nama, kodebarang, namabarang,\n> sum(case when bulan = 1 then keluar else 0 end) as Jan,\n> sum(case when bulan = 2 then keluar else 0 end) as Feb,\n> sum(case when bulan = 3 then keluar else 0 end) as Maret,\n> sum(case when bulan = 4 then keluar else 0 end) as April,\n> sum(case when bulan = 5 then keluar else 0 end) as Mei,\n> sum(case when bulan = 6 then keluar else 0 end) as Juni,\n> sum(case when bulan = 7 then keluar else 0 end) as Juli,\n> sum(case when bulan = 8 then keluar else 0 end) as Agust,\n> sum(case when bulan = 9 then keluar else 0 end) as Sept,\n> sum(case when bulan = 10 then keluar else 0 end) as Okt,\n> sum(case when bulan = 11 then keluar else 0 end) as Nov,\n> sum(case when bulan = 12 then keluar else 0 end) as Des,\n> sum(coalesce(keluar,0)) as total\n> from qry1\n> 
group by id, nama, kodebarang, namabarang\n> order by total desc\n> limit 1000\n> \n> this is the explain analyse :\n> \n> \"Limit (cost=346389.90..346392.40 rows=1000 width=376) (actual time=56765.848..56766.229 rows=1000 loops=1)\"\n> \" CTE qry1\"\n> \" -> Hash Join (cost=4444.64..62683.91 rows=766519 width=84) (actual time=87.342..1786.851 rows=737662 loops=1)\"\n> \" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n> \" -> Seq Scan on tbltransaksi (cost=0.00..24704.06 rows=766519 width=29) (actual time=0.010..271.147 rows=767225 loops=1)\"\n> \" Filter: ((jualid IS NOT NULL) OR ((returjualid IS NOT NULL) AND (date_part('year'::text, (tanggal)::timestamp without time zone) = 2013::double precision)))\"\n> \" Rows Removed by Filter: 37441\"\n> \" -> Hash (cost=3380.52..3380.52 rows=85130 width=68) (actual time=87.265..87.265 rows=65219 loops=1)\"\n> \" Buckets: 16384 Batches: 1 Memory Usage: 5855kB\"\n> \" -> Hash Join (cost=21.68..3380.52 rows=85130 width=68) (actual time=0.748..59.469 rows=65219 loops=1)\"\n> \" Hash Cond: ((tblproduk.supplierid)::text = (tblsupplier.id)::text)\"\n> \" -> Seq Scan on tblproduk (cost=0.00..2188.30 rows=85130 width=51) (actual time=0.005..17.184 rows=85034 loops=1)\"\n> \" -> Hash (cost=14.08..14.08 rows=608 width=26) (actual time=0.730..0.730 rows=609 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 28kB\"\n> \" -> Seq Scan on tblsupplier (cost=0.00..14.08 rows=608 width=26) (actual time=0.006..0.298 rows=609 loops=1)\"\n> \" -> Sort (cost=283705.99..283897.62 rows=76652 width=376) (actual time=56765.846..56766.006 rows=1000 loops=1)\"\n> \" Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n> \" Sort Method: top-N heapsort Memory: 280kB\"\n> \" -> GroupAggregate (cost=221247.80..279503.25 rows=76652 width=376) (actual time=50731.735..56739.181 rows=23630 loops=1)\"\n> \" -> Sort (cost=221247.80..223164.10 rows=766519 width=376) (actual time=50731.687..54455.528 rows=737662 loops=1)\"\n> \" Sort Key: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang\"\n> \" Sort Method: external merge Disk: 71872kB\"\n> \" -> CTE Scan on qry1 (cost=0.00..15330.38 rows=766519 width=376) (actual time=87.346..2577.066 rows=737662 loops=1)\"\n> \"Total runtime: 56787.136 ms\"\n\nI'd try 2 things:\n\n1) set work_mem to ~100Mb. You don't have to do that globally in\npostgresql.conf. 
You can set it for the current session only.\n\n set work_mem to '100MB';\n\nThen run your query.\n\n2) change the common table expression to a subquery:\n\nselect\n id, nama, kodebarang, namabarang,\n sum(case when bulan = 1 then keluar else 0 end) as Jan,\n sum(case when bulan = 2 then keluar else 0 end) as Feb,\n sum(case when bulan = 3 then keluar else 0 end) as Maret,\n sum(case when bulan = 4 then keluar else 0 end) as April,\n sum(case when bulan = 5 then keluar else 0 end) as Mei,\n sum(case when bulan = 6 then keluar else 0 end) as Juni,\n sum(case when bulan = 7 then keluar else 0 end) as Juli,\n sum(case when bulan = 8 then keluar else 0 end) as Agust,\n sum(case when bulan = 9 then keluar else 0 end) as Sept,\n sum(case when bulan = 10 then keluar else 0 end) as Okt,\n sum(case when bulan = 11 then keluar else 0 end) as Nov,\n sum(case when bulan = 12 then keluar else 0 end) as Des,\n sum(coalesce(keluar,0)) as total\nfrom (select tanggal, extract(month from tanggal) as bulan,\n tblsupplier.id, nama, kodebarang, namabarang, keluar,\n case\n when discount<=100 then\n keluar*(harga -(discount/100*harga))\n when tbltransaksi.discount>100 then\n keluar*(harga-discount)\n end as jumlah\n from tbltransaksi\n join tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n join tblsupplier on tblproduk.supplierid=tblsupplier.id\n where jualid is not null or returjualid is not null\n and extract(year from tanggal)='2013') qry1\ngroup by id, nama, kodebarang, namabarang\norder by total desc\nlimit 1000\n\nSelamat berjaya,\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 01 Dec 2013 13:06:51 +0100", "msg_from": "=?ISO-8859-1?Q?Torsten_F=F6rtsch?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Dear All, \n\nThanks for the suggestion ! 
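
As a self-contained illustration of the work_mem suggestion above -- the Sort node switching from an on-disk merge to an in-memory quicksort once work_mem is large enough -- a sketch like this can be run in any session (the generate_series row count is arbitrary, chosen only to overflow a small work_mem):

set work_mem to '4MB';     -- deliberately small, to make the sort spill to disk
explain (analyze, buffers)
  select g from generate_series(1, 1000000) g order by g desc;
  -- the Sort node should report something like "Sort Method: external merge  Disk: ..."

set work_mem to '100MB';   -- session-local, no restart needed
explain (analyze, buffers)
  select g from generate_series(1, 1000000) g order by g desc;
  -- the Sort node should now report "Sort Method: quicksort  Memory: ..."

reset work_mem;
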
I tried to change the work_mem and the query only needs 4.9 sec to display the result !\n\n\nTorsten, your 2nd option didnt work with this error :\n\nERROR: syntax error at or near \"discount\"\nLINE 1: ...rang, keluar, case when discount<=...\n ^\n\nFor Mat : what command i can use to show how big the tables in MB ?\n\nThanks\n\nOn Dec 1, 2013, at 8:06 PM, Torsten Förtsch wrote:\n\n> On 01/12/13 10:07, Hengky Liwandouw wrote:\n>> with qry1 as \n>> (select tanggal, extract(month from tanggal) as bulan, tblsupplier.id, nama, kodebarang, namabarang, keluar, \n>> \tcase when discount<=100 then\n>> \t keluar*(harga -(discount/100*harga))\n>> \twhen tbltransaksi.discount>100 then\n>> \t\tkeluar*(harga-discount)\n>> \tend \n>> as jumlah\n>> from tbltransaksi \n>> join tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n>> join tblsupplier on tblproduk.supplierid=tblsupplier.id\n>> where jualid is not null or returjualid is not null\n>> and extract(year from tanggal)='2013')\n>> \n>> select \n>> id, nama, kodebarang, namabarang,\n>> sum(case when bulan = 1 then keluar else 0 end) as Jan,\n>> sum(case when bulan = 2 then keluar else 0 end) as Feb,\n>> sum(case when bulan = 3 then keluar else 0 end) as Maret,\n>> sum(case when bulan = 4 then keluar else 0 end) as April,\n>> sum(case when bulan = 5 then keluar else 0 end) as Mei,\n>> sum(case when bulan = 6 then keluar else 0 end) as Juni,\n>> sum(case when bulan = 7 then keluar else 0 end) as Juli,\n>> sum(case when bulan = 8 then keluar else 0 end) as Agust,\n>> sum(case when bulan = 9 then keluar else 0 end) as Sept,\n>> sum(case when bulan = 10 then keluar else 0 end) as Okt,\n>> sum(case when bulan = 11 then keluar else 0 end) as Nov,\n>> sum(case when bulan = 12 then keluar else 0 end) as Des,\n>> sum(coalesce(keluar,0)) as total\n>> from qry1\n>> group by id, nama, kodebarang, namabarang\n>> order by total desc\n>> limit 1000\n>> \n>> this is the explain analyse :\n>> \n>> \"Limit (cost=346389.90..346392.40 rows=1000 width=376) (actual time=56765.848..56766.229 rows=1000 loops=1)\"\n>> \" CTE qry1\"\n>> \" -> Hash Join (cost=4444.64..62683.91 rows=766519 width=84) (actual time=87.342..1786.851 rows=737662 loops=1)\"\n>> \" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n>> \" -> Seq Scan on tbltransaksi (cost=0.00..24704.06 rows=766519 width=29) (actual time=0.010..271.147 rows=767225 loops=1)\"\n>> \" Filter: ((jualid IS NOT NULL) OR ((returjualid IS NOT NULL) AND (date_part('year'::text, (tanggal)::timestamp without time zone) = 2013::double precision)))\"\n>> \" Rows Removed by Filter: 37441\"\n>> \" -> Hash (cost=3380.52..3380.52 rows=85130 width=68) (actual time=87.265..87.265 rows=65219 loops=1)\"\n>> \" Buckets: 16384 Batches: 1 Memory Usage: 5855kB\"\n>> \" -> Hash Join (cost=21.68..3380.52 rows=85130 width=68) (actual time=0.748..59.469 rows=65219 loops=1)\"\n>> \" Hash Cond: ((tblproduk.supplierid)::text = (tblsupplier.id)::text)\"\n>> \" -> Seq Scan on tblproduk (cost=0.00..2188.30 rows=85130 width=51) (actual time=0.005..17.184 rows=85034 loops=1)\"\n>> \" -> Hash (cost=14.08..14.08 rows=608 width=26) (actual time=0.730..0.730 rows=609 loops=1)\"\n>> \" Buckets: 1024 Batches: 1 Memory Usage: 28kB\"\n>> \" -> Seq Scan on tblsupplier (cost=0.00..14.08 rows=608 width=26) (actual time=0.006..0.298 rows=609 loops=1)\"\n>> \" -> Sort (cost=283705.99..283897.62 rows=76652 width=376) (actual time=56765.846..56766.006 rows=1000 loops=1)\"\n>> \" Sort Key: (sum(COALESCE(qry1.keluar, 0::numeric)))\"\n>> \" 
Sort Method: top-N heapsort Memory: 280kB\"\n>> \" -> GroupAggregate (cost=221247.80..279503.25 rows=76652 width=376) (actual time=50731.735..56739.181 rows=23630 loops=1)\"\n>> \" -> Sort (cost=221247.80..223164.10 rows=766519 width=376) (actual time=50731.687..54455.528 rows=737662 loops=1)\"\n>> \" Sort Key: qry1.id, qry1.nama, qry1.kodebarang, qry1.namabarang\"\n>> \" Sort Method: external merge Disk: 71872kB\"\n>> \" -> CTE Scan on qry1 (cost=0.00..15330.38 rows=766519 width=376) (actual time=87.346..2577.066 rows=737662 loops=1)\"\n>> \"Total runtime: 56787.136 ms\"\n> \n> I'd try 2 things:\n> \n> 1) set work_mem to ~100Mb. You don't have to do that globally in\n> postgresql.conf. You can set it for the current session only.\n> \n> set work_mem to '100MB';\n> \n> Then run your query.\n> \n> 2) change the common table expression to a subquery:\n> \n> select\n> id, nama, kodebarang, namabarang,\n> sum(case when bulan = 1 then keluar else 0 end) as Jan,\n> sum(case when bulan = 2 then keluar else 0 end) as Feb,\n> sum(case when bulan = 3 then keluar else 0 end) as Maret,\n> sum(case when bulan = 4 then keluar else 0 end) as April,\n> sum(case when bulan = 5 then keluar else 0 end) as Mei,\n> sum(case when bulan = 6 then keluar else 0 end) as Juni,\n> sum(case when bulan = 7 then keluar else 0 end) as Juli,\n> sum(case when bulan = 8 then keluar else 0 end) as Agust,\n> sum(case when bulan = 9 then keluar else 0 end) as Sept,\n> sum(case when bulan = 10 then keluar else 0 end) as Okt,\n> sum(case when bulan = 11 then keluar else 0 end) as Nov,\n> sum(case when bulan = 12 then keluar else 0 end) as Des,\n> sum(coalesce(keluar,0)) as total\n> from (select tanggal, extract(month from tanggal) as bulan,\n> tblsupplier.id, nama, kodebarang, namabarang, keluar,\n> case\n> when discount<=100 then\n> keluar*(harga -(discount/100*harga))\n> when tbltransaksi.discount>100 then\n> keluar*(harga-discount)\n> end as jumlah\n> from tbltransaksi\n> join tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n> join tblsupplier on tblproduk.supplierid=tblsupplier.id\n> where jualid is not null or returjualid is not null\n> and extract(year from tanggal)='2013') qry1\n> group by id, nama, kodebarang, namabarang\n> order by total desc\n> limit 1000\n> \n> Selamat berjaya,\n> Torsten\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Dec 2013 20:25:38 +0800", "msg_from": "Hengky Liwandouw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Torsten F�rtsch <[email protected]> wrote:\n> I'd try 2 things:\n> \n> 1) set work_mem to ~100Mb. You don't have to do that globally in\n> postgresql.conf. You can set it for the current session only.\n> \n> set work_mem to '100MB';\n> \n> Then run your query.\n> \n> 2) change the common table expression to a subquery:\n\nYeah, agree.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. 
N 51.05082°, E 13.56889°\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Dec 2013 13:27:02 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Hengky Liwandouw <[email protected]> wrote:\n\n> \n> For Mat : what command i can use to show how big the tables in MB ?\n\nhttp://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Dec 2013 13:29:47 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Thanks a lot Andreas :)\n\nThe tbltransaksi size = 263MB\nTotal database size = 1277MB\n\nQuite small for so many records stored in it.\n\nThis group really helpful. \n\nOn Dec 1, 2013, at 8:29 PM, Andreas Kretschmer wrote:\n\n> Hengky Liwandouw <[email protected]> wrote:\n> \n>> \n>> For Mat : what command i can use to show how big the tables in MB ?\n> \n> http://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html\n> \n> \n> Andreas\n> -- \n> Really, I'm not out to destroy Microsoft. That will just be a completely\n> unintentional side effect. (Linus Torvalds)\n> \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Dec 2013 22:11:40 +0800", "msg_from": "Hengky Liwandouw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Hengky Liwandouw <[email protected]> wrote:\n\n> where jualid is not null \n> and extract(year from tanggal)='2013')\n\n> But the problem is : when i change the where clause to :\n>\n> where jualid is not null or returjualid is not null\n> and extract(year from tanggal)='2013')\n>\n> (there is additional or returjualid is not null,) the query needs\n> 56 second to display the result.\n\nBefore worrying about the run time, I would worry about whether you\nare getting the results you expect.  That will be interpreted as:\n\nwhere jualid is not null\n   or (returjualid is not null and extract(year from tanggal) = '2013')\n\n... 
not:\n\nwhere (jualid is not null or returjualid is not null)\n  and extract(year from tanggal) = '2013'\nAND has higher priority than OR; so if you want to limit by year\nfrom tanggal even when jualid is not null, you must use\nparentheses.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 1 Dec 2013 08:02:05 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Thanks Kevin. You are absolutely right. I should use parentheses, it is\nwhat i want for the query to do.\n\nIt also increasing processing time to 5.444 seconds. Should be okay i think.\n\n\nOn Sun, Dec 1, 2013 at 11:02 PM, Kevin Grittner <[email protected]> wrote:\n\n> Hengky Liwandouw <[email protected]> wrote:\n>\n> > where jualid is not null\n> > and extract(year from tanggal)='2013')\n>\n> > But the problem is : when i change the where clause to :\n> >\n> > where jualid is not null or returjualid is not null\n> > and extract(year from tanggal)='2013')\n> >\n> > (there is additional or returjualid is not null,) the query needs\n> > 56 second to display the result.\n>\n> Before worrying about the run time, I would worry about whether you\n> are getting the results you expect. That will be interpreted as:\n>\n> where jualid is not null\n> or (returjualid is not null and extract(year from tanggal) = '2013')\n>\n> ... not:\n>\n> where (jualid is not null or returjualid is not null)\n> and extract(year from tanggal) = '2013'\n> AND has higher priority than OR; so if you want to limit by year\n> from tanggal even when jualid is not null, you must use\n> parentheses.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nThanks Kevin. You are absolutely right. I should use parentheses, it is what i want for the query to do.It also increasing processing time to 5.444 seconds. Should be okay i think.\nOn Sun, Dec 1, 2013 at 11:02 PM, Kevin Grittner <[email protected]> wrote:\nHengky Liwandouw <[email protected]> wrote:\n\n> where jualid is not null\n> and extract(year from tanggal)='2013')\n\n> But the problem is : when i change the where clause to :\n>\n> where jualid is not null or returjualid is not null\n> and extract(year from tanggal)='2013')\n>\n> (there is additional or returjualid is not null,) the query needs\n> 56 second to display the result.\n\nBefore worrying about the run time, I would worry about whether you\nare getting the results you expect.  That will be interpreted as:\n\nwhere jualid is not null\n   or (returjualid is not null and extract(year from tanggal) = '2013')\n\n... 
not:\n\nwhere (jualid is not null or returjualid is not null)\n  and extract(year from tanggal) = '2013'\nAND has higher priority than OR; so if you want to limit by year\nfrom tanggal even when jualid is not null, you must use\nparentheses.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 1 Dec 2013 23:14:38 +0700", "msg_from": "Hengky Lie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "Dear Torsten and friends,\n\nThis is another good case to analyse why the query performance is not the\nsame :\n\nThere are 2 query :\n(1)\n\nwith qry1 as (\nselect subkategori, kodebarang as produkid, namabarang, keluar,\ntbltransaksi.modal*keluar as ttlmodal,\n case\n when tbltransaksi.discount<=100 then\n keluar*(harga - (discount/100*harga))\n when tbltransaksi.discount>100\n then keluar*(harga-discount)\n end as jumlah\n from tblpenjualan\n join tbltransaksi on tblpenjualan.id=tbltransaksi.jualid\n join tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n join tblsubkategori on\ntblproduk.subkategoriid=tblsubkategori.tblsubkategoriid\n join tblkategori on tblkategori.kategoriid=tblsubkategori.kategoriid\n where tblpenjualan.tanggal between '01/01/13' and '31/10/13')\n\n\nselect subkategori,produkid, namabarang , sum(keluar) as ttlkeluar,\nsum(jumlah) as jumlah, sum(ttlmodal) as ttlmodal\nfrom qry1\ngroup by subkategori, produkid, namabarang\n\n\"QUERY PLAN\"\n\"HashAggregate (cost=99124.61..99780.94 rows=65633 width=334) (actual\ntime=3422.786..3434.511 rows=24198 loops=1)\"\n\" Buffers: shared hit=14543\"\n\" CTE qry1\"\n\" -> Hash Join (cost=11676.07..76153.06 rows=656330 width=73)\n(actual time=181.683..2028.046 rows=657785 loops=1)\"\n\" Hash Cond: ((tbltransaksi.kodebarang)::text =\n(tblproduk.produkid)::text)\"\n\" Buffers: shared hit=14543\"\n\" -> Hash Join (cost=7247.75..44651.13 rows=656330\nwidth=31) (actual time=84.885..787.029 rows=658438 loops=1)\"\n\" Hash Cond: (tbltransaksi.jualid = tblpenjualan.id)\"\n\" Buffers: shared hit=13204\"\n\" -> Seq Scan on tbltransaksi (cost=0.00..18730.83\nrows=807283 width=35) (actual time=0.005..157.004 rows=807033\nloops=1)\"\n\" Buffers: shared hit=10658\"\n\" -> Hash (cost=5293.64..5293.64 rows=156329 width=4)\n(actual time=84.842..84.842 rows=154900 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 3631kB\"\n\" Buffers: shared hit=2546\"\n\" -> Seq Scan on tblpenjualan\n(cost=0.00..5293.64 rows=156329 width=4) (actual time=0.007..49.444\nrows=154900 loops=1)\"\n\" Filter: ((tanggal >= '2013-01-01'::date)\nAND (tanggal <= '2013-10-31'::date))\"\n\" Rows Removed by Filter: 27928\"\n\" Buffers: shared hit=2546\"\n\" -> Hash (cost=3364.19..3364.19 rows=85130 width=55)\n(actual time=96.736..96.736 rows=84701 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 6323kB\"\n\" Buffers: shared hit=1339\"\n\" -> Hash Join (cost=5.35..3364.19 rows=85130\nwidth=55) (actual time=0.241..62.038 rows=84701 loops=1)\"\n\" Hash Cond: ((tblproduk.subkategoriid)::text =\n(tblsubkategori.tblsubkategoriid)::text)\"\n\" Buffers: shared hit=1339\"\n\" -> Seq Scan on tblproduk (cost=0.00..2188.30\nrows=85130 width=45) (actual time=0.008..17.549 rows=85035 loops=1)\"\n\" Buffers: shared hit=1337\"\n\" -> Hash (cost=4.23..4.23 rows=90 width=17)\n(actual time=0.224..0.224 rows=90 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n\" Buffers: shared hit=2\"\n\" -> Hash Join (cost=1.09..4.23 rows=90\nwidth=17) (actual time=0.028..0.153 
rows=90 loops=1)\"\n\" Hash Cond:\n((tblsubkategori.kategoriid)::text = (tblkategori.kategoriid)::text)\"\n\" Buffers: shared hit=2\"\n\" -> Seq Scan on tblsubkategori\n(cost=0.00..1.90 rows=90 width=21) (actual time=0.005..0.029 rows=90\nloops=1)\"\n\" Buffers: shared hit=1\"\n\" -> Hash (cost=1.04..1.04 rows=4\nwidth=4) (actual time=0.011..0.011 rows=4 loops=1)\"\n\" Buckets: 1024 Batches: 1\nMemory Usage: 1kB\"\n\" Buffers: shared hit=1\"\n\" -> Seq Scan on tblkategori\n(cost=0.00..1.04 rows=4 width=4) (actual time=0.002..0.004 rows=4\nloops=1)\"\n\" Buffers: shared hit=1\"\n\" -> CTE Scan on qry1 (cost=0.00..13126.60 rows=656330 width=334)\n(actual time=181.687..2556.526 rows=657785 loops=1)\"\n\" Buffers: shared hit=14543\"\n\"Total runtime: 3454.442 ms\"\n\n(2)this is exactly the same query with no.1 except it uses subquery\n\n\tselect subkategori,produkid, namabarang , sum(keluar) as ttlkeluar,\nsum(jumlah) as jumlah, sum(ttlmodal) as ttlmodal from\n\t( select subkategori, kodebarang as produkid, namabarang, keluar,\ntbltransaksi.modal*keluar as ttlmodal,\n\tcase\n\twhen tbltransaksi.discount<=100 then\n\t\tkeluar*(harga - (discount/100*harga))\n\t\twhen tbltransaksi.discount>100\n\t\t\tthen keluar*(harga-discount)\n\t\tend as jumlah\n\t\tfrom tblpenjualan\n\t\tjoin tbltransaksi on tblpenjualan.id=tbltransaksi.jualid\n\t\tjoin tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n\t\tjoin tblsubkategori on tblproduk.subkategoriid=tblsubkategori.tblsubkategoriid\n\t\tjoin tblkategori on tblkategori.kategoriid=tblsubkategori.kategoriid\n\t\twhere tblpenjualan.tanggal between '01/01/13' and '31/10/13')\n\t\tas dt group by subkategori, produkid, namabarang\n\nThe analyse result :\n\n\"QUERY PLAN\"\n\"GroupAggregate (cost=124800.44..157616.94 rows=656330 width=73)\n(actual time=13895.782..15236.212 rows=24198 loops=1)\"\n\" Buffers: shared hit=14543\"\n\" -> Sort (cost=124800.44..126441.26 rows=656330 width=73) (actual\ntime=13895.750..14024.911 rows=657785 loops=1)\"\n\" Sort Key: tblsubkategori.subkategori,\ntbltransaksi.kodebarang, tblproduk.namabarang\"\n\" Sort Method: quicksort Memory: 103431kB\"\n\" Buffers: shared hit=14543\"\n\" -> Hash Join (cost=11676.07..61385.63 rows=656330 width=73)\n(actual time=177.521..1264.431 rows=657785 loops=1)\"\n\" Hash Cond: ((tbltransaksi.kodebarang)::text =\n(tblproduk.produkid)::text)\"\n\" Buffers: shared hit=14543\"\n\" -> Hash Join (cost=7247.75..44651.13 rows=656330\nwidth=31) (actual time=84.473..739.064 rows=658438 loops=1)\"\n\" Hash Cond: (tbltransaksi.jualid = tblpenjualan.id)\"\n\" Buffers: shared hit=13204\"\n\" -> Seq Scan on tbltransaksi\n(cost=0.00..18730.83 rows=807283 width=35) (actual time=0.005..146.601\nrows=807033 loops=1)\"\n\" Buffers: shared hit=10658\"\n\" -> Hash (cost=5293.64..5293.64 rows=156329\nwidth=4) (actual time=84.429..84.429 rows=154900 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 3631kB\"\n\" Buffers: shared hit=2546\"\n\" -> Seq Scan on tblpenjualan\n(cost=0.00..5293.64 rows=156329 width=4) (actual time=0.008..48.968\nrows=154900 loops=1)\"\n\" Filter: ((tanggal >=\n'2013-01-01'::date) AND (tanggal <= '2013-10-31'::date))\"\n\" Rows Removed by Filter: 27928\"\n\" Buffers: shared hit=2546\"\n\" -> Hash (cost=3364.19..3364.19 rows=85130 width=55)\n(actual time=92.998..92.998 rows=84701 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 6323kB\"\n\" Buffers: shared hit=1339\"\n\" -> Hash Join (cost=5.35..3364.19 rows=85130\nwidth=55) (actual time=0.240..59.587 rows=84701 loops=1)\"\n\" Hash 
Cond: ((tblproduk.subkategoriid)::text\n= (tblsubkategori.tblsubkategoriid)::text)\"\n\" Buffers: shared hit=1339\"\n\" -> Seq Scan on tblproduk\n(cost=0.00..2188.30 rows=85130 width=45) (actual time=0.008..16.942\nrows=85035 loops=1)\"\n\" Buffers: shared hit=1337\"\n\" -> Hash (cost=4.23..4.23 rows=90\nwidth=17) (actual time=0.221..0.221 rows=90 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n\" Buffers: shared hit=2\"\n\" -> Hash Join (cost=1.09..4.23\nrows=90 width=17) (actual time=0.028..0.142 rows=90 loops=1)\"\n\" Hash Cond:\n((tblsubkategori.kategoriid)::text = (tblkategori.kategoriid)::text)\"\n\" Buffers: shared hit=2\"\n\" -> Seq Scan on tblsubkategori\n(cost=0.00..1.90 rows=90 width=21) (actual time=0.006..0.046 rows=90\nloops=1)\"\n\" Buffers: shared hit=1\"\n\" -> Hash (cost=1.04..1.04\nrows=4 width=4) (actual time=0.012..0.012 rows=4 loops=1)\"\n\" Buckets: 1024 Batches: 1\n Memory Usage: 1kB\"\n\" Buffers: shared hit=1\"\n\" -> Seq Scan on\ntblkategori (cost=0.00..1.04 rows=4 width=4) (actual\ntime=0.002..0.005 rows=4 loops=1)\"\n\" Buffers: shared hit=1\"\n\"Total runtime: 15244.038 ms\"\n\nThis is my Postgresqlconf :\nmax_connections=50\nshared_buffers=1024MB\nwall_buffers=16MB\nmax_prepared_transactions=0\nwork_mem=50MB\nmaintenance_work_mem=256MB\n\nThanks\n\n\nOn Sun, Dec 1, 2013 at 9:39 PM, Torsten Förtsch <[email protected]>wrote:\n\n> On 01/12/13 13:40, Hengky Liwandouw wrote:\n> > Torsten, your 2nd option works now. I dont know maybe copy and paste\n> error. I just want to report that your 2nd option with work_mem=100MB\n> required the same amount of time (about 58 seconds), while my query\n> required 4.9 seconds.\n> >\n> > What make this two query so different ?\n> >\n> Without the \"explain (analyze,buffers) ...\" it's hard to say. A CTE is\n> currently a way to trick the query planner because it's planned\n> separately. A subquery on the other hand is integrated in the outer\n> query and planned/optimized as one thing.\n>\n> If your planner parameters are correctly set up, the subquery should\n> almost always outrun the CTE. Often, though, not much.\n>\n> Now, you may ask why CTE then exist at all? There are things that cannot\n> be expressed without them, in particular WITH RECURSIVE.\n>\n> The fact that it performs so badly as a subquery indicates that either\n> your table statistics are suboptimal or more probably the planner\n> parameters or work_mem.\n>\n> Another point I have just noticed, how does it perform if you change\n>\n> and extract(... from tanggal)='2013'\n>\n> to\n>\n> and '2013-01-01'::date <= tanggal\n> and tanggal < '2013-01-01'::date + '1 year'::interval\n>\n> Also, I think it would be possible to even get rid of the subquery. 
At\n> least you can get rid of the tanggal and jumlah output from the subquery.\n>\n> select s.id, s.nama, t.kodebarang, p.namabarang,\n> sum(case when extract(month from t.tanggal) = 1\n> then t.keluar else 0 end) as jan,\n> sum(case when extract(month from t.tanggal) = 2\n> then t.keluar else 0 end) as feb,\n> ...,\n> sum(t.keluar) as total\n> from tbltransaksi t\n> join tblproduk p on t.kodebarang=p.produkid\n> join tblsupplier s on p.supplierid=s.id\n> where (t.jualid is not null or t.returjualid is not null)\n> and '2013-01-01'::date <= t.tanggal\n> and t.tanggal < '2013-01-01'::date + '1 year'::interval\n> group by s.id, s.nama, t.kodebarang, p.namabarang\n> order by total desc\n> limit 1000\n>\n> would be interesting to see the \"explain (analyze,buffers)\" output for\n> the query above.\n>\n> Please double-check the query. I think it should do exactly the same as\n> your query. But you know, shit happens.\n>\n> BTW, am I right in assuming that you are from Malaysia or Indonesia? I\n> am trying to learn a bit of Malay. I am a complete beginner, though.\n>\n> Selamat berjaya (is that possible to wish you success?)\n> Torsten\n>\n\nDear Torsten and friends,This is another good case to analyse why the query performance is not  the same :There are 2 query :(1)with qry1 as (select subkategori, kodebarang as produkid, namabarang, keluar, tbltransaksi.modal*keluar as ttlmodal,\n    case     when tbltransaksi.discount<=100 then        keluar*(harga - (discount/100*harga))        when tbltransaksi.discount>100             then keluar*(harga-discount)        end as jumlah\n        from tblpenjualan        join tbltransaksi on tblpenjualan.id=tbltransaksi.jualid        join tblproduk on tbltransaksi.kodebarang=tblproduk.produkid        join tblsubkategori on tblproduk.subkategoriid=tblsubkategori.tblsubkategoriid\n        join tblkategori on tblkategori.kategoriid=tblsubkategori.kategoriid        where tblpenjualan.tanggal between '01/01/13' and '31/10/13')select subkategori,produkid, namabarang , sum(keluar) as ttlkeluar, sum(jumlah) as jumlah, sum(ttlmodal) as ttlmodal \nfrom qry1group by subkategori, produkid, namabarang\"QUERY PLAN\"\n\"HashAggregate (cost=99124.61..99780.94 rows=65633 width=334) (actual time=3422.786..3434.511 rows=24198 loops=1)\"\n\" Buffers: shared hit=14543\"\n\" CTE qry1\"\n\" -> Hash Join (cost=11676.07..76153.06 rows=656330 width=73) (actual time=181.683..2028.046 rows=657785 loops=1)\"\n\" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n\" Buffers: shared hit=14543\"\n\" -> Hash Join (cost=7247.75..44651.13 rows=656330 width=31) (actual time=84.885..787.029 rows=658438 loops=1)\"\n\" Hash Cond: (tbltransaksi.jualid = tblpenjualan.id)\"\n\" Buffers: shared hit=13204\"\n\" -> Seq Scan on tbltransaksi (cost=0.00..18730.83 rows=807283 width=35) (actual time=0.005..157.004 rows=807033 loops=1)\"\n\" Buffers: shared hit=10658\"\n\" -> Hash (cost=5293.64..5293.64 rows=156329 width=4) (actual time=84.842..84.842 rows=154900 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 3631kB\"\n\" Buffers: shared hit=2546\"\n\" -> Seq Scan on tblpenjualan (cost=0.00..5293.64 rows=156329 width=4) (actual time=0.007..49.444 rows=154900 loops=1)\"\n\" Filter: ((tanggal >= '2013-01-01'::date) AND (tanggal <= '2013-10-31'::date))\"\n\" Rows Removed by Filter: 27928\"\n\" Buffers: shared hit=2546\"\n\" -> Hash (cost=3364.19..3364.19 rows=85130 width=55) (actual time=96.736..96.736 rows=84701 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 
6323kB\"\n\" Buffers: shared hit=1339\"\n\" -> Hash Join (cost=5.35..3364.19 rows=85130 width=55) (actual time=0.241..62.038 rows=84701 loops=1)\"\n\" Hash Cond: ((tblproduk.subkategoriid)::text = (tblsubkategori.tblsubkategoriid)::text)\"\n\" Buffers: shared hit=1339\"\n\" -> Seq Scan on tblproduk (cost=0.00..2188.30 rows=85130 width=45) (actual time=0.008..17.549 rows=85035 loops=1)\"\n\" Buffers: shared hit=1337\"\n\" -> Hash (cost=4.23..4.23 rows=90 width=17) (actual time=0.224..0.224 rows=90 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n\" Buffers: shared hit=2\"\n\" -> Hash Join (cost=1.09..4.23 rows=90 width=17) (actual time=0.028..0.153 rows=90 loops=1)\"\n\" Hash Cond: ((tblsubkategori.kategoriid)::text = (tblkategori.kategoriid)::text)\"\n\" Buffers: shared hit=2\"\n\" -> Seq Scan on tblsubkategori (cost=0.00..1.90 rows=90 width=21) (actual time=0.005..0.029 rows=90 loops=1)\"\n\" Buffers: shared hit=1\"\n\" -> Hash (cost=1.04..1.04 rows=4 width=4) (actual time=0.011..0.011 rows=4 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\n\" Buffers: shared hit=1\"\n\" -> Seq Scan on tblkategori (cost=0.00..1.04 rows=4 width=4) (actual time=0.002..0.004 rows=4 loops=1)\"\n\" Buffers: shared hit=1\"\n\" -> CTE Scan on qry1 (cost=0.00..13126.60 rows=656330 width=334) (actual time=181.687..2556.526 rows=657785 loops=1)\"\n\" Buffers: shared hit=14543\"\n\"Total runtime: 3454.442 ms\"(2)this is exactly the same query with no.1 except it uses subquery\tselect subkategori,produkid, namabarang , sum(keluar) as ttlkeluar, sum(jumlah) as jumlah, sum(ttlmodal) as ttlmodal from\n\t( select subkategori, kodebarang as produkid, namabarang, keluar, tbltransaksi.modal*keluar as ttlmodal,\tcase \twhen tbltransaksi.discount<=100 then\t\tkeluar*(harga - (discount/100*harga))\t\twhen tbltransaksi.discount>100 \n\t\t\tthen keluar*(harga-discount)\t\tend as jumlah\t\tfrom tblpenjualan\t\tjoin tbltransaksi on tblpenjualan.id=tbltransaksi.jualid\t\tjoin tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n\t\tjoin tblsubkategori on tblproduk.subkategoriid=tblsubkategori.tblsubkategoriid\t\tjoin tblkategori on tblkategori.kategoriid=tblsubkategori.kategoriid\t\twhere tblpenjualan.tanggal between '01/01/13' and '31/10/13')\n\t\tas dt group by subkategori, produkid, namabarangThe analyse result :\"QUERY PLAN\"\n\"GroupAggregate (cost=124800.44..157616.94 rows=656330 width=73) (actual time=13895.782..15236.212 rows=24198 loops=1)\"\n\" Buffers: shared hit=14543\"\n\" -> Sort (cost=124800.44..126441.26 rows=656330 width=73) (actual time=13895.750..14024.911 rows=657785 loops=1)\"\n\" Sort Key: tblsubkategori.subkategori, tbltransaksi.kodebarang, tblproduk.namabarang\"\n\" Sort Method: quicksort Memory: 103431kB\"\n\" Buffers: shared hit=14543\"\n\" -> Hash Join (cost=11676.07..61385.63 rows=656330 width=73) (actual time=177.521..1264.431 rows=657785 loops=1)\"\n\" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n\" Buffers: shared hit=14543\"\n\" -> Hash Join (cost=7247.75..44651.13 rows=656330 width=31) (actual time=84.473..739.064 rows=658438 loops=1)\"\n\" Hash Cond: (tbltransaksi.jualid = tblpenjualan.id)\"\n\" Buffers: shared hit=13204\"\n\" -> Seq Scan on tbltransaksi (cost=0.00..18730.83 rows=807283 width=35) (actual time=0.005..146.601 rows=807033 loops=1)\"\n\" Buffers: shared hit=10658\"\n\" -> Hash (cost=5293.64..5293.64 rows=156329 width=4) (actual time=84.429..84.429 rows=154900 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 3631kB\"\n\" 
Buffers: shared hit=2546\"\n\" -> Seq Scan on tblpenjualan (cost=0.00..5293.64 rows=156329 width=4) (actual time=0.008..48.968 rows=154900 loops=1)\"\n\" Filter: ((tanggal >= '2013-01-01'::date) AND (tanggal <= '2013-10-31'::date))\"\n\" Rows Removed by Filter: 27928\"\n\" Buffers: shared hit=2546\"\n\" -> Hash (cost=3364.19..3364.19 rows=85130 width=55) (actual time=92.998..92.998 rows=84701 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 6323kB\"\n\" Buffers: shared hit=1339\"\n\" -> Hash Join (cost=5.35..3364.19 rows=85130 width=55) (actual time=0.240..59.587 rows=84701 loops=1)\"\n\" Hash Cond: ((tblproduk.subkategoriid)::text = (tblsubkategori.tblsubkategoriid)::text)\"\n\" Buffers: shared hit=1339\"\n\" -> Seq Scan on tblproduk (cost=0.00..2188.30 rows=85130 width=45) (actual time=0.008..16.942 rows=85035 loops=1)\"\n\" Buffers: shared hit=1337\"\n\" -> Hash (cost=4.23..4.23 rows=90 width=17) (actual time=0.221..0.221 rows=90 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n\" Buffers: shared hit=2\"\n\" -> Hash Join (cost=1.09..4.23 rows=90 width=17) (actual time=0.028..0.142 rows=90 loops=1)\"\n\" Hash Cond: ((tblsubkategori.kategoriid)::text = (tblkategori.kategoriid)::text)\"\n\" Buffers: shared hit=2\"\n\" -> Seq Scan on tblsubkategori (cost=0.00..1.90 rows=90 width=21) (actual time=0.006..0.046 rows=90 loops=1)\"\n\" Buffers: shared hit=1\"\n\" -> Hash (cost=1.04..1.04 rows=4 width=4) (actual time=0.012..0.012 rows=4 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\n\" Buffers: shared hit=1\"\n\" -> Seq Scan on tblkategori (cost=0.00..1.04 rows=4 width=4) (actual time=0.002..0.005 rows=4 loops=1)\"\n\" Buffers: shared hit=1\"\n\"Total runtime: 15244.038 ms\"This is my  Postgresqlconf :max_connections=50shared_buffers=1024MBwall_buffers=16MBmax_prepared_transactions=0work_mem=50MB\nmaintenance_work_mem=256MBThanksOn Sun, Dec 1, 2013 at 9:39 PM, Torsten Förtsch <[email protected]> wrote:\nOn 01/12/13 13:40, Hengky Liwandouw wrote:\n> Torsten, your 2nd option works now. I dont know maybe copy and paste error. I just want to report that your 2nd option with work_mem=100MB required the same amount of time (about 58 seconds), while my query required 4.9 seconds.\n\n>\n> What make this two query so different ?\n>\nWithout the \"explain (analyze,buffers) ...\" it's hard to say. A CTE is\ncurrently a way to trick the query planner because it's planned\nseparately. A subquery on the other hand is integrated in the outer\nquery and planned/optimized as one thing.\n\nIf your planner parameters are correctly set up, the subquery should\nalmost always outrun the CTE. Often, though, not much.\n\nNow, you may ask why CTE then exist at all? There are things that cannot\nbe expressed without them, in particular WITH RECURSIVE.\n\nThe fact that it performs so badly as a subquery indicates that either\nyour table statistics are suboptimal or more probably the planner\nparameters or work_mem.\n\nAnother point I have just noticed, how does it perform if you change\n\n  and extract(... from tanggal)='2013'\n\nto\n\n  and '2013-01-01'::date <= tanggal\n  and tanggal < '2013-01-01'::date + '1 year'::interval\n\nAlso, I think it would be possible to even get rid of the subquery. 
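(A side note on the date predicate just above: the range form also lets the planner use an ordinary btree index on tanggal, if one exists, and lets it estimate rows from the column histogram, whereas extract(year from tanggal)='2013' can only use an expression index. A minimal sketch, assuming tanggal is the tbltransaksi column used in the query and that no such indexes exist yet; whether the planner actually prefers an index over the seq scan depends on how selective the range is:

CREATE INDEX tbltransaksi_tanggal_idx ON tbltransaksi (tanggal);

-- only needed if the extract() form is kept; note the extra
-- parentheses required around an index expression:
CREATE INDEX tbltransaksi_tahun_idx ON tbltransaksi ((extract(year FROM tanggal)));
)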
At\nleast you can get rid of the tanggal and jumlah output from the subquery.\n\nselect s.id, s.nama, t.kodebarang, p.namabarang,\n       sum(case when extract(month from t.tanggal) = 1\n                then t.keluar else 0 end) as jan,\n       sum(case when extract(month from t.tanggal) = 2\n                then t.keluar else 0 end) as feb,\n       ...,\n       sum(t.keluar) as total\n  from tbltransaksi t\n  join tblproduk p on t.kodebarang=p.produkid\n  join tblsupplier s on p.supplierid=s.id\n where (t.jualid is not null or t.returjualid is not null)\n   and '2013-01-01'::date <= t.tanggal\n   and t.tanggal < '2013-01-01'::date + '1 year'::interval\n group by s.id, s.nama, t.kodebarang, p.namabarang\n order by total desc\n limit 1000\n\nwould be interesting to see the \"explain (analyze,buffers)\" output for\nthe query above.\n\nPlease double-check the query. I think it should do exactly the same as\nyour query. But you know, shit happens.\n\nBTW, am I right in assuming that you are from Malaysia or Indonesia? I\nam trying to learn a bit of Malay. I am a complete beginner, though.\n\nSelamat berjaya      (is that possible to wish you success?)\nTorsten", "msg_date": "Mon, 2 Dec 2013 00:33:03 +0700", "msg_from": "Hengky Lie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" }, { "msg_contents": "sorry, for now, work_mem=100MB\n\n\nOn Mon, Dec 2, 2013 at 12:33 AM, Hengky Lie <[email protected]>wrote:\n\n> Dear Torsten and friends,\n>\n> This is another good case to analyse why the query performance is not the\n> same :\n>\n> There are 2 query :\n> (1)\n>\n> with qry1 as (\n> select subkategori, kodebarang as produkid, namabarang, keluar,\n> tbltransaksi.modal*keluar as ttlmodal,\n> case\n> when tbltransaksi.discount<=100 then\n> keluar*(harga - (discount/100*harga))\n>\n> when tbltransaksi.discount>100\n> then keluar*(harga-discount)\n> end as jumlah\n> from tblpenjualan\n> join tbltransaksi on tblpenjualan.id=tbltransaksi.jualid\n> join tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n> join tblsubkategori on\n> tblproduk.subkategoriid=tblsubkategori.tblsubkategoriid\n> join tblkategori on\n> tblkategori.kategoriid=tblsubkategori.kategoriid\n> where tblpenjualan.tanggal between '01/01/13' and '31/10/13')\n>\n>\n> select subkategori,produkid, namabarang , sum(keluar) as ttlkeluar,\n> sum(jumlah) as jumlah, sum(ttlmodal) as ttlmodal\n> from qry1\n> group by subkategori, produkid, namabarang\n>\n> \"QUERY PLAN\"\n> \"HashAggregate (cost=99124.61..99780.94 rows=65633 width=334) (actual time=3422.786..3434.511 rows=24198 loops=1)\"\n> \" Buffers: shared hit=14543\"\n> \" CTE qry1\"\n> \" -> Hash Join (cost=11676.07..76153.06 rows=656330 width=73) (actual time=181.683..2028.046 rows=657785 loops=1)\"\n> \" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n> \" Buffers: shared hit=14543\"\n> \" -> Hash Join (cost=7247.75..44651.13 rows=656330 width=31) (actual time=84.885..787.029 rows=658438 loops=1)\"\n> \" Hash Cond: (tbltransaksi.jualid = tblpenjualan.id)\"\n> \" Buffers: shared hit=13204\"\n> \" -> Seq Scan on tbltransaksi (cost=0.00..18730.83 rows=807283 width=35) (actual time=0.005..157.004 rows=807033 loops=1)\"\n> \" Buffers: shared hit=10658\"\n> \" -> Hash (cost=5293.64..5293.64 rows=156329 width=4) (actual time=84.842..84.842 rows=154900 loops=1)\"\n> \" Buckets: 16384 Batches: 1 Memory Usage: 3631kB\"\n> \" Buffers: shared hit=2546\"\n> \" -> Seq Scan on tblpenjualan (cost=0.00..5293.64 rows=156329 width=4) 
(actual time=0.007..49.444 rows=154900 loops=1)\"\n> \" Filter: ((tanggal >= '2013-01-01'::date) AND (tanggal <= '2013-10-31'::date))\"\n> \" Rows Removed by Filter: 27928\"\n> \" Buffers: shared hit=2546\"\n> \" -> Hash (cost=3364.19..3364.19 rows=85130 width=55) (actual time=96.736..96.736 rows=84701 loops=1)\"\n> \" Buckets: 16384 Batches: 1 Memory Usage: 6323kB\"\n> \" Buffers: shared hit=1339\"\n> \" -> Hash Join (cost=5.35..3364.19 rows=85130 width=55) (actual time=0.241..62.038 rows=84701 loops=1)\"\n> \" Hash Cond: ((tblproduk.subkategoriid)::text = (tblsubkategori.tblsubkategoriid)::text)\"\n> \" Buffers: shared hit=1339\"\n> \" -> Seq Scan on tblproduk (cost=0.00..2188.30 rows=85130 width=45) (actual time=0.008..17.549 rows=85035 loops=1)\"\n> \" Buffers: shared hit=1337\"\n> \" -> Hash (cost=4.23..4.23 rows=90 width=17) (actual time=0.224..0.224 rows=90 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n> \" Buffers: shared hit=2\"\n> \" -> Hash Join (cost=1.09..4.23 rows=90 width=17) (actual time=0.028..0.153 rows=90 loops=1)\"\n> \" Hash Cond: ((tblsubkategori.kategoriid)::text = (tblkategori.kategoriid)::text)\"\n> \" Buffers: shared hit=2\"\n> \" -> Seq Scan on tblsubkategori (cost=0.00..1.90 rows=90 width=21) (actual time=0.005..0.029 rows=90 loops=1)\"\n> \" Buffers: shared hit=1\"\n> \" -> Hash (cost=1.04..1.04 rows=4 width=4) (actual time=0.011..0.011 rows=4 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\n> \" Buffers: shared hit=1\"\n> \" -> Seq Scan on tblkategori (cost=0.00..1.04 rows=4 width=4) (actual time=0.002..0.004 rows=4 loops=1)\"\n> \" Buffers: shared hit=1\"\n> \" -> CTE Scan on qry1 (cost=0.00..13126.60 rows=656330 width=334) (actual time=181.687..2556.526 rows=657785 loops=1)\"\n> \" Buffers: shared hit=14543\"\n> \"Total runtime: 3454.442 ms\"\n>\n> (2)this is exactly the same query with no.1 except it uses subquery\n>\n> \tselect subkategori,produkid, namabarang , sum(keluar) as ttlkeluar, sum(jumlah) as jumlah, sum(ttlmodal) as ttlmodal from\n>\n> \t( select subkategori, kodebarang as produkid, namabarang, keluar, tbltransaksi.modal*keluar as ttlmodal,\n> \tcase\n> \twhen tbltransaksi.discount<=100 then\n> \t\tkeluar*(harga - (discount/100*harga))\n>\n> \t\twhen tbltransaksi.discount>100\n>\n> \t\t\tthen keluar*(harga-discount)\n> \t\tend as jumlah\n> \t\tfrom tblpenjualan\n> \t\tjoin tbltransaksi on tblpenjualan.id=tbltransaksi.jualid\n> \t\tjoin tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n>\n> \t\tjoin tblsubkategori on tblproduk.subkategoriid=tblsubkategori.tblsubkategoriid\n> \t\tjoin tblkategori on tblkategori.kategoriid=tblsubkategori.kategoriid\n> \t\twhere tblpenjualan.tanggal between '01/01/13' and '31/10/13')\n>\n> \t\tas dt group by subkategori, produkid, namabarang\n>\n> The analyse result :\n>\n> \"QUERY PLAN\"\n> \"GroupAggregate (cost=124800.44..157616.94 rows=656330 width=73) (actual time=13895.782..15236.212 rows=24198 loops=1)\"\n> \" Buffers: shared hit=14543\"\n> \" -> Sort (cost=124800.44..126441.26 rows=656330 width=73) (actual time=13895.750..14024.911 rows=657785 loops=1)\"\n> \" Sort Key: tblsubkategori.subkategori, tbltransaksi.kodebarang, tblproduk.namabarang\"\n> \" Sort Method: quicksort Memory: 103431kB\"\n> \" Buffers: shared hit=14543\"\n> \" -> Hash Join (cost=11676.07..61385.63 rows=656330 width=73) (actual time=177.521..1264.431 rows=657785 loops=1)\"\n> \" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n> \" Buffers: shared hit=14543\"\n> \" -> Hash 
Join (cost=7247.75..44651.13 rows=656330 width=31) (actual time=84.473..739.064 rows=658438 loops=1)\"\n> \" Hash Cond: (tbltransaksi.jualid = tblpenjualan.id)\"\n> \" Buffers: shared hit=13204\"\n> \" -> Seq Scan on tbltransaksi (cost=0.00..18730.83 rows=807283 width=35) (actual time=0.005..146.601 rows=807033 loops=1)\"\n> \" Buffers: shared hit=10658\"\n> \" -> Hash (cost=5293.64..5293.64 rows=156329 width=4) (actual time=84.429..84.429 rows=154900 loops=1)\"\n> \" Buckets: 16384 Batches: 1 Memory Usage: 3631kB\"\n> \" Buffers: shared hit=2546\"\n> \" -> Seq Scan on tblpenjualan (cost=0.00..5293.64 rows=156329 width=4) (actual time=0.008..48.968 rows=154900 loops=1)\"\n> \" Filter: ((tanggal >= '2013-01-01'::date) AND (tanggal <= '2013-10-31'::date))\"\n> \" Rows Removed by Filter: 27928\"\n> \" Buffers: shared hit=2546\"\n> \" -> Hash (cost=3364.19..3364.19 rows=85130 width=55) (actual time=92.998..92.998 rows=84701 loops=1)\"\n> \" Buckets: 16384 Batches: 1 Memory Usage: 6323kB\"\n> \" Buffers: shared hit=1339\"\n> \" -> Hash Join (cost=5.35..3364.19 rows=85130 width=55) (actual time=0.240..59.587 rows=84701 loops=1)\"\n> \" Hash Cond: ((tblproduk.subkategoriid)::text = (tblsubkategori.tblsubkategoriid)::text)\"\n> \" Buffers: shared hit=1339\"\n> \" -> Seq Scan on tblproduk (cost=0.00..2188.30 rows=85130 width=45) (actual time=0.008..16.942 rows=85035 loops=1)\"\n> \" Buffers: shared hit=1337\"\n> \" -> Hash (cost=4.23..4.23 rows=90 width=17) (actual time=0.221..0.221 rows=90 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n> \" Buffers: shared hit=2\"\n> \" -> Hash Join (cost=1.09..4.23 rows=90 width=17) (actual time=0.028..0.142 rows=90 loops=1)\"\n> \" Hash Cond: ((tblsubkategori.kategoriid)::text = (tblkategori.kategoriid)::text)\"\n> \" Buffers: shared hit=2\"\n> \" -> Seq Scan on tblsubkategori (cost=0.00..1.90 rows=90 width=21) (actual time=0.006..0.046 rows=90 loops=1)\"\n> \" Buffers: shared hit=1\"\n> \" -> Hash (cost=1.04..1.04 rows=4 width=4) (actual time=0.012..0.012 rows=4 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\n> \" Buffers: shared hit=1\"\n> \" -> Seq Scan on tblkategori (cost=0.00..1.04 rows=4 width=4) (actual time=0.002..0.005 rows=4 loops=1)\"\n> \" Buffers: shared hit=1\"\n> \"Total runtime: 15244.038 ms\"\n>\n> This is my Postgresqlconf :\n> max_connections=50\n> shared_buffers=1024MB\n> wall_buffers=16MB\n> max_prepared_transactions=0\n> work_mem=50MB\n> maintenance_work_mem=256MB\n>\n> Thanks\n>\n>\n> On Sun, Dec 1, 2013 at 9:39 PM, Torsten Förtsch <[email protected]>wrote:\n>\n>> On 01/12/13 13:40, Hengky Liwandouw wrote:\n>> > Torsten, your 2nd option works now. I dont know maybe copy and paste\n>> error. I just want to report that your 2nd option with work_mem=100MB\n>> required the same amount of time (about 58 seconds), while my query\n>> required 4.9 seconds.\n>> >\n>> > What make this two query so different ?\n>> >\n>> Without the \"explain (analyze,buffers) ...\" it's hard to say. A CTE is\n>> currently a way to trick the query planner because it's planned\n>> separately. A subquery on the other hand is integrated in the outer\n>> query and planned/optimized as one thing.\n>>\n>> If your planner parameters are correctly set up, the subquery should\n>> almost always outrun the CTE. Often, though, not much.\n>>\n>> Now, you may ask why CTE then exist at all? 
There are things that cannot\n>> be expressed without them, in particular WITH RECURSIVE.\n>>\n>> The fact that it performs so badly as a subquery indicates that either\n>> your table statistics are suboptimal or more probably the planner\n>> parameters or work_mem.\n>>\n>> Another point I have just noticed, how does it perform if you change\n>>\n>> and extract(... from tanggal)='2013'\n>>\n>> to\n>>\n>> and '2013-01-01'::date <= tanggal\n>> and tanggal < '2013-01-01'::date + '1 year'::interval\n>>\n>> Also, I think it would be possible to even get rid of the subquery. At\n>> least you can get rid of the tanggal and jumlah output from the subquery.\n>>\n>> select s.id, s.nama, t.kodebarang, p.namabarang,\n>> sum(case when extract(month from t.tanggal) = 1\n>> then t.keluar else 0 end) as jan,\n>> sum(case when extract(month from t.tanggal) = 2\n>> then t.keluar else 0 end) as feb,\n>> ...,\n>> sum(t.keluar) as total\n>> from tbltransaksi t\n>> join tblproduk p on t.kodebarang=p.produkid\n>> join tblsupplier s on p.supplierid=s.id\n>> where (t.jualid is not null or t.returjualid is not null)\n>> and '2013-01-01'::date <= t.tanggal\n>> and t.tanggal < '2013-01-01'::date + '1 year'::interval\n>> group by s.id, s.nama, t.kodebarang, p.namabarang\n>> order by total desc\n>> limit 1000\n>>\n>> would be interesting to see the \"explain (analyze,buffers)\" output for\n>> the query above.\n>>\n>> Please double-check the query. I think it should do exactly the same as\n>> your query. But you know, shit happens.\n>>\n>> BTW, am I right in assuming that you are from Malaysia or Indonesia? I\n>> am trying to learn a bit of Malay. I am a complete beginner, though.\n>>\n>> Selamat berjaya (is that possible to wish you success?)\n>> Torsten\n>>\n>\n>\n\nsorry, for now, work_mem=100MBOn Mon, Dec 2, 2013 at 12:33 AM, Hengky Lie <[email protected]> wrote:\nDear Torsten and friends,This is another good case to analyse why the query performance is not  the same :\nThere are 2 query :(1)with qry1 as (select subkategori, kodebarang as produkid, namabarang, keluar, tbltransaksi.modal*keluar as ttlmodal,\n    case     when tbltransaksi.discount<=100 then        keluar*(harga - (discount/100*harga))        when tbltransaksi.discount>100             then keluar*(harga-discount)        end as jumlah\n\n        from tblpenjualan        join tbltransaksi on tblpenjualan.id=tbltransaksi.jualid        join tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n        join tblsubkategori on tblproduk.subkategoriid=tblsubkategori.tblsubkategoriid\n        join tblkategori on tblkategori.kategoriid=tblsubkategori.kategoriid        where tblpenjualan.tanggal between '01/01/13' and '31/10/13')select subkategori,produkid, namabarang , sum(keluar) as ttlkeluar, sum(jumlah) as jumlah, sum(ttlmodal) as ttlmodal \n\nfrom qry1group by subkategori, produkid, namabarang\"QUERY PLAN\"\n\"HashAggregate (cost=99124.61..99780.94 rows=65633 width=334) (actual time=3422.786..3434.511 rows=24198 loops=1)\"\n\" Buffers: shared hit=14543\"\n\" CTE qry1\"\n\" -> Hash Join (cost=11676.07..76153.06 rows=656330 width=73) (actual time=181.683..2028.046 rows=657785 loops=1)\"\n\" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n\" Buffers: shared hit=14543\"\n\" -> Hash Join (cost=7247.75..44651.13 rows=656330 width=31) (actual time=84.885..787.029 rows=658438 loops=1)\"\n\" Hash Cond: (tbltransaksi.jualid = tblpenjualan.id)\"\n\" Buffers: shared hit=13204\"\n\" -> Seq Scan on tbltransaksi (cost=0.00..18730.83 
rows=807283 width=35) (actual time=0.005..157.004 rows=807033 loops=1)\"\n\" Buffers: shared hit=10658\"\n\" -> Hash (cost=5293.64..5293.64 rows=156329 width=4) (actual time=84.842..84.842 rows=154900 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 3631kB\"\n\" Buffers: shared hit=2546\"\n\" -> Seq Scan on tblpenjualan (cost=0.00..5293.64 rows=156329 width=4) (actual time=0.007..49.444 rows=154900 loops=1)\"\n\" Filter: ((tanggal >= '2013-01-01'::date) AND (tanggal <= '2013-10-31'::date))\"\n\" Rows Removed by Filter: 27928\"\n\" Buffers: shared hit=2546\"\n\" -> Hash (cost=3364.19..3364.19 rows=85130 width=55) (actual time=96.736..96.736 rows=84701 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 6323kB\"\n\" Buffers: shared hit=1339\"\n\" -> Hash Join (cost=5.35..3364.19 rows=85130 width=55) (actual time=0.241..62.038 rows=84701 loops=1)\"\n\" Hash Cond: ((tblproduk.subkategoriid)::text = (tblsubkategori.tblsubkategoriid)::text)\"\n\" Buffers: shared hit=1339\"\n\" -> Seq Scan on tblproduk (cost=0.00..2188.30 rows=85130 width=45) (actual time=0.008..17.549 rows=85035 loops=1)\"\n\" Buffers: shared hit=1337\"\n\" -> Hash (cost=4.23..4.23 rows=90 width=17) (actual time=0.224..0.224 rows=90 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n\" Buffers: shared hit=2\"\n\" -> Hash Join (cost=1.09..4.23 rows=90 width=17) (actual time=0.028..0.153 rows=90 loops=1)\"\n\" Hash Cond: ((tblsubkategori.kategoriid)::text = (tblkategori.kategoriid)::text)\"\n\" Buffers: shared hit=2\"\n\" -> Seq Scan on tblsubkategori (cost=0.00..1.90 rows=90 width=21) (actual time=0.005..0.029 rows=90 loops=1)\"\n\" Buffers: shared hit=1\"\n\" -> Hash (cost=1.04..1.04 rows=4 width=4) (actual time=0.011..0.011 rows=4 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\n\" Buffers: shared hit=1\"\n\" -> Seq Scan on tblkategori (cost=0.00..1.04 rows=4 width=4) (actual time=0.002..0.004 rows=4 loops=1)\"\n\" Buffers: shared hit=1\"\n\" -> CTE Scan on qry1 (cost=0.00..13126.60 rows=656330 width=334) (actual time=181.687..2556.526 rows=657785 loops=1)\"\n\" Buffers: shared hit=14543\"\n\"Total runtime: 3454.442 ms\"(2)this is exactly the same query with no.1 except it uses subquery\tselect subkategori,produkid, namabarang , sum(keluar) as ttlkeluar, sum(jumlah) as jumlah, sum(ttlmodal) as ttlmodal from\n\n\t( select subkategori, kodebarang as produkid, namabarang, keluar, tbltransaksi.modal*keluar as ttlmodal,\tcase \twhen tbltransaksi.discount<=100 then\t\tkeluar*(harga - (discount/100*harga))\n\t\twhen tbltransaksi.discount>100 \n\t\t\tthen keluar*(harga-discount)\t\tend as jumlah\t\tfrom tblpenjualan\t\tjoin tbltransaksi on tblpenjualan.id=tbltransaksi.jualid\t\tjoin tblproduk on tbltransaksi.kodebarang=tblproduk.produkid\n\n\t\tjoin tblsubkategori on tblproduk.subkategoriid=tblsubkategori.tblsubkategoriid\t\tjoin tblkategori on tblkategori.kategoriid=tblsubkategori.kategoriid\t\twhere tblpenjualan.tanggal between '01/01/13' and '31/10/13')\n\n\t\tas dt group by subkategori, produkid, namabarangThe analyse result :\"QUERY PLAN\"\n\"GroupAggregate (cost=124800.44..157616.94 rows=656330 width=73) (actual time=13895.782..15236.212 rows=24198 loops=1)\"\n\" Buffers: shared hit=14543\"\n\" -> Sort (cost=124800.44..126441.26 rows=656330 width=73) (actual time=13895.750..14024.911 rows=657785 loops=1)\"\n\" Sort Key: tblsubkategori.subkategori, tbltransaksi.kodebarang, tblproduk.namabarang\"\n\" Sort Method: quicksort Memory: 103431kB\"\n\" Buffers: shared hit=14543\"\n\" -> Hash Join 
(cost=11676.07..61385.63 rows=656330 width=73) (actual time=177.521..1264.431 rows=657785 loops=1)\"\n\" Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)\"\n\" Buffers: shared hit=14543\"\n\" -> Hash Join (cost=7247.75..44651.13 rows=656330 width=31) (actual time=84.473..739.064 rows=658438 loops=1)\"\n\" Hash Cond: (tbltransaksi.jualid = tblpenjualan.id)\"\n\" Buffers: shared hit=13204\"\n\" -> Seq Scan on tbltransaksi (cost=0.00..18730.83 rows=807283 width=35) (actual time=0.005..146.601 rows=807033 loops=1)\"\n\" Buffers: shared hit=10658\"\n\" -> Hash (cost=5293.64..5293.64 rows=156329 width=4) (actual time=84.429..84.429 rows=154900 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 3631kB\"\n\" Buffers: shared hit=2546\"\n\" -> Seq Scan on tblpenjualan (cost=0.00..5293.64 rows=156329 width=4) (actual time=0.008..48.968 rows=154900 loops=1)\"\n\" Filter: ((tanggal >= '2013-01-01'::date) AND (tanggal <= '2013-10-31'::date))\"\n\" Rows Removed by Filter: 27928\"\n\" Buffers: shared hit=2546\"\n\" -> Hash (cost=3364.19..3364.19 rows=85130 width=55) (actual time=92.998..92.998 rows=84701 loops=1)\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 6323kB\"\n\" Buffers: shared hit=1339\"\n\" -> Hash Join (cost=5.35..3364.19 rows=85130 width=55) (actual time=0.240..59.587 rows=84701 loops=1)\"\n\" Hash Cond: ((tblproduk.subkategoriid)::text = (tblsubkategori.tblsubkategoriid)::text)\"\n\" Buffers: shared hit=1339\"\n\" -> Seq Scan on tblproduk (cost=0.00..2188.30 rows=85130 width=45) (actual time=0.008..16.942 rows=85035 loops=1)\"\n\" Buffers: shared hit=1337\"\n\" -> Hash (cost=4.23..4.23 rows=90 width=17) (actual time=0.221..0.221 rows=90 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n\" Buffers: shared hit=2\"\n\" -> Hash Join (cost=1.09..4.23 rows=90 width=17) (actual time=0.028..0.142 rows=90 loops=1)\"\n\" Hash Cond: ((tblsubkategori.kategoriid)::text = (tblkategori.kategoriid)::text)\"\n\" Buffers: shared hit=2\"\n\" -> Seq Scan on tblsubkategori (cost=0.00..1.90 rows=90 width=21) (actual time=0.006..0.046 rows=90 loops=1)\"\n\" Buffers: shared hit=1\"\n\" -> Hash (cost=1.04..1.04 rows=4 width=4) (actual time=0.012..0.012 rows=4 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\n\" Buffers: shared hit=1\"\n\" -> Seq Scan on tblkategori (cost=0.00..1.04 rows=4 width=4) (actual time=0.002..0.005 rows=4 loops=1)\"\n\" Buffers: shared hit=1\"\n\"Total runtime: 15244.038 ms\"This is my  Postgresqlconf :max_connections=50shared_buffers=1024MBwall_buffers=16MBmax_prepared_transactions=0\nwork_mem=50MB\nmaintenance_work_mem=256MBThanksOn Sun, Dec 1, 2013 at 9:39 PM, Torsten Förtsch <[email protected]> wrote:\nOn 01/12/13 13:40, Hengky Liwandouw wrote:\n> Torsten, your 2nd option works now. I dont know maybe copy and paste error. I just want to report that your 2nd option with work_mem=100MB required the same amount of time (about 58 seconds), while my query required 4.9 seconds.\n\n\n>\n> What make this two query so different ?\n>\nWithout the \"explain (analyze,buffers) ...\" it's hard to say. A CTE is\ncurrently a way to trick the query planner because it's planned\nseparately. A subquery on the other hand is integrated in the outer\nquery and planned/optimized as one thing.\n\nIf your planner parameters are correctly set up, the subquery should\nalmost always outrun the CTE. Often, though, not much.\n\nNow, you may ask why CTE then exist at all? 
There are things that cannot\nbe expressed without them, in particular WITH RECURSIVE.\n\nThe fact that it performs so badly as a subquery indicates that either\nyour table statistics are suboptimal or more probably the planner\nparameters or work_mem.\n\nAnother point I have just noticed, how does it perform if you change\n\n  and extract(... from tanggal)='2013'\n\nto\n\n  and '2013-01-01'::date <= tanggal\n  and tanggal < '2013-01-01'::date + '1 year'::interval\n\nAlso, I think it would be possible to even get rid of the subquery. At\nleast you can get rid of the tanggal and jumlah output from the subquery.\n\nselect s.id, s.nama, t.kodebarang, p.namabarang,\n       sum(case when extract(month from t.tanggal) = 1\n                then t.keluar else 0 end) as jan,\n       sum(case when extract(month from t.tanggal) = 2\n                then t.keluar else 0 end) as feb,\n       ...,\n       sum(t.keluar) as total\n  from tbltransaksi t\n  join tblproduk p on t.kodebarang=p.produkid\n  join tblsupplier s on p.supplierid=s.id\n where (t.jualid is not null or t.returjualid is not null)\n   and '2013-01-01'::date <= t.tanggal\n   and t.tanggal < '2013-01-01'::date + '1 year'::interval\n group by s.id, s.nama, t.kodebarang, p.namabarang\n order by total desc\n limit 1000\n\nwould be interesting to see the \"explain (analyze,buffers)\" output for\nthe query above.\n\nPlease double-check the query. I think it should do exactly the same as\nyour query. But you know, shit happens.\n\nBTW, am I right in assuming that you are from Malaysia or Indonesia? I\nam trying to learn a bit of Malay. I am a complete beginner, though.\n\nSelamat berjaya      (is that possible to wish you success?)\nTorsten", "msg_date": "Mon, 2 Dec 2013 00:41:28 +0700", "msg_from": "Hengky Lie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up the query" } ]
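The table and database sizes reported earlier in the thread (263 MB for tbltransaksi, 1277 MB for the whole database) can be obtained with the built-in size functions — something along these lines (pg_total_relation_size includes indexes and TOAST; pg_relation_size would give the heap alone):

SELECT pg_size_pretty(pg_total_relation_size('tbltransaksi')) AS tbltransaksi_size,
       pg_size_pretty(pg_database_size(current_database()))   AS database_size;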
[ { "msg_contents": "We have several independent tables on a multi-core machine serving Select\nqueries. These tables fit into memory; and each Select queries goes over\none table's pages sequentially. In this experiment, there are no indexes or\ntable joins.\n\nWhen we send concurrent Select queries to these tables, query performance\ndoesn't scale out with the number of CPU cores. We find that complex Select\nqueries scale out better than simpler ones. We also find that increasing\nthe block size from 8 KB to 32 KB, or increasing shared_buffers to include\nthe working set mitigates the problem to some extent.\n\nFor our experiments, we chose an 8-core machine with 68 GB of memory from\nAmazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and\nset shared_buffers to 4 GB.\n\nWe then generated 1, 2, 4, and 8 separate tables using the data generator\nfrom the industry standard TPC-H benchmark. Each table we generated, called\nlineitem-1, lineitem-2, etc., had about 750 MB of data. Next, we sent 1, 2,\n4, and 8 concurrent Select queries to these tables to observe the scale out\nbehavior. Our expectation was that since this machine had 8 cores, our run\ntimes would stay constant all throughout. Also, we would have expected the\nmachine's CPU utilization to go up to 100% at 8 concurrent queries. Neither\nof those assumptions held true.\n\nWe found that query run times degraded as we increased the number of\nconcurrent Select queries. Also, CPU utilization flattened out at less than\n50% for the simpler queries. Full results with block size of 8KB are below:\n\n Table select count(*) TPC-H Simple (#6)[2]\n TPC-H Complex (#1)[1]\n1 Table / 1 query 1.5 s 2.5 s\n 8.4 s\n2 Tables / 2 queries 1.5 s 2.5 s\n 8.4 s\n4 Tables / 4 queries 2.0 s 2.9 s\n 8.8 s\n8 Tables / 8 queries 3.3 s 4.0 s\n 9.6 s\n\nWe then increased the block size (BLCKSZ) from 8 KB to 32 KB and recompiled\nPostgreSQL. This change had a positive impact on query completion times.\nHere are the new results with block size of 32 KB:\n\n Table select count(*) TPC-H Simple (#6)[2]\n TPC-H Complex (#1)[1]\n1 Table / 1 query 1.5 s 2.3 s\n 8.0 s\n2 Tables / 2 queries 1.5 s 2.3 s\n 8.0 s\n4 Tables / 4 queries 1.6 s 2.4 s\n 8.1 s\n8 Tables / 8 queries 1.8 s 2.7 s\n 8.3 s\n\nAs a quick side, we also repeated the same experiment on an EC2 instance\nwith 16 CPU cores, and found that the scale out behavior became worse\nthere. (We also tried increasing the shared_buffers to 30 GB. This change\ncompletely solved the scaling out problem on this instance type, but hurt\nour performance on the hi1.4xlarge instances.)\n\nUnfortunately, increasing the block size from 8 to 32 KB has other\nimplications for some of our customers. Could you help us out with the\nproblem here?\n\nWhat can we do to identify the problem's root cause? Can we work around it?\n\nThank you,\nMetin\n\n[1] http://examples.citusdata.com/tpch_queries.html#query-1\n[2] http://examples.citusdata.com/tpch_queries.html#query-6\n\nWe have several independent tables on a multi-core machine serving Select queries. These tables fit into memory; and each Select queries goes over one table's pages sequentially. In this experiment, there are no indexes or table joins.\nWhen we send concurrent Select queries to these tables, query performance doesn't scale out with the number of CPU cores. We find that complex Select queries scale out better than simpler ones. 
We also find that increasing the block size from 8 KB to 32 KB, or increasing shared_buffers to include the working set mitigates the problem to some extent.\nFor our experiments, we chose an 8-core machine with 68 GB of memory from Amazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and set shared_buffers to 4 GB.\nWe then generated 1, 2, 4, and 8 separate tables using the data generator from the industry standard TPC-H benchmark. Each table we generated, called lineitem-1, lineitem-2, etc., had about 750 MB of data. Next, we sent 1, 2, 4, and 8 concurrent Select queries to these tables to observe the scale out behavior. Our expectation was that since this machine had 8 cores, our run times would stay constant all throughout. Also, we would have expected the machine's CPU utilization to go up to 100% at 8 concurrent queries. Neither of those assumptions held true.\nWe found that query run times degraded as we increased the number of concurrent Select queries. Also, CPU utilization flattened out at less than 50% for the simpler queries. Full results with block size of 8KB are below:\n                         Table select count(*)    TPC-H Simple (#6)[2]    TPC-H Complex (#1)[1]1 Table  / 1 query               1.5 s                    2.5 s                  8.4 s\n2 Tables / 2 queries             1.5 s                    2.5 s                  8.4 s4 Tables / 4 queries             2.0 s                    2.9 s                  8.8 s\n8 Tables / 8 queries             3.3 s                    4.0 s                  9.6 sWe then increased the block size (BLCKSZ) from 8 KB to 32 KB and recompiled PostgreSQL. This change had a positive impact on query completion times. Here are the new results with block size of 32 KB:\n                         Table select count(*)    TPC-H Simple (#6)[2]    TPC-H Complex (#1)[1]1 Table  / 1 query               1.5 s                    2.3 s                  8.0 s\n2 Tables / 2 queries             1.5 s                    2.3 s                  8.0 s4 Tables / 4 queries             1.6 s                    2.4 s                  8.1 s\n8 Tables / 8 queries             1.8 s                    2.7 s                  8.3 sAs a quick side, we also repeated the same experiment on an EC2 instance with 16 CPU cores, and found that the scale out behavior became worse there. (We also tried increasing the shared_buffers to 30 GB. This change completely solved the scaling out problem on this instance type, but hurt our performance on the hi1.4xlarge instances.)\nUnfortunately, increasing the block size from 8 to 32 KB has other implications for some of our customers. Could you help us out with the problem here?\nWhat can we do to identify the problem's root cause? Can we work around it?\nThank you,Metin[1] http://examples.citusdata.com/tpch_queries.html#query-1\n[2] http://examples.citusdata.com/tpch_queries.html#query-6", "msg_date": "Tue, 3 Dec 2013 15:41:43 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel Select query performance and shared buffers" }, { "msg_contents": "On Tue, Dec 3, 2013 at 7:11 PM, Metin Doslu <[email protected]> wrote:\n> We have several independent tables on a multi-core machine serving Select\n> queries. These tables fit into memory; and each Select queries goes over one\n> table's pages sequentially. 
In this experiment, there are no indexes or\n> table joins.\n>\n> When we send concurrent Select queries to these tables, query performance\n> doesn't scale out with the number of CPU cores. We find that complex Select\n> queries scale out better than simpler ones. We also find that increasing the\n> block size from 8 KB to 32 KB, or increasing shared_buffers to include the\n> working set mitigates the problem to some extent.\n>\n> For our experiments, we chose an 8-core machine with 68 GB of memory from\n> Amazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and set\n> shared_buffers to 4 GB.\n>\n> We then generated 1, 2, 4, and 8 separate tables using the data generator\n> from the industry standard TPC-H benchmark. Each table we generated, called\n> lineitem-1, lineitem-2, etc., had about 750 MB of data.\n I think all of this data cannot fit in shared_buffers, you might\nwant to increase shared_buffers\n to larger size (not 30GB but close to your data size) to see how it behaves\n\n\n> Next, we sent 1, 2,\n> 4, and 8 concurrent Select queries to these tables to observe the scale out\n> behavior. Our expectation was that since this machine had 8 cores, our run\n> times would stay constant all throughout. Also, we would have expected the\n> machine's CPU utilization to go up to 100% at 8 concurrent queries. Neither\n> of those assumptions held true.\n\nYou queries have Aggregation, ORDER/GROUP BY, so there is a chance\nthat I/O can happen for those operation's\nif PG doesn't have sufficient memory (work_mem) to perform such operation.\n\n> As a quick side, we also repeated the same experiment on an EC2 instance\n> with 16 CPU cores, and found that the scale out behavior became worse there.\n> (We also tried increasing the shared_buffers to 30 GB. This change\n> completely solved the scaling out problem on this instance type, but hurt\n> our performance on the hi1.4xlarge instances.)\n\nInstead of 30GB, you can try with lesser value, but it should be close\nto your data size.\n\n> Unfortunately, increasing the block size from 8 to 32 KB has other\n> implications for some of our customers. Could you help us out with the\n> problem here?\n>\n> What can we do to identify the problem's root cause? Can we work around it?\n\nI think without finding the real cause, it would be difficult to get\nthe reasonable workaround.\nCan you simplify your queries (simple scan or in other words no\naggregation or other things) to see how\nthey behave in your env., once you are able to see simple queries\nscaling as per your expectation, you\ncan try with complex one's.\n\nNote - post this on pgsql-performance as well.\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 4 Dec 2013 09:27:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Wed, Dec 4, 2013 at 12:57 AM, Amit Kapila <[email protected]> wrote:\n>> As a quick side, we also repeated the same experiment on an EC2 instance\n>> with 16 CPU cores, and found that the scale out behavior became worse there.\n>> (We also tried increasing the shared_buffers to 30 GB. 
This change\n>> completely solved the scaling out problem on this instance type, but hurt\n>> our performance on the hi1.4xlarge instances.)\n>\n> Instead of 30GB, you can try with lesser value, but it should be close\n> to your data size.\n\nThe OS cache should have provided a similar function.\n\nIn fact, larger shared buffers shouldn't have made a difference if the\nmain I/O pattern are sequential scans, because they use a ring buffer.\n\nCan we have the explain analyze of those queries, postgres\nconfiguration, perhaps vmstat output during execution?\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 4 Dec 2013 03:10:20 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "> I think all of this data cannot fit in shared_buffers, you might want\nto increase shared_buffers\n> to larger size (not 30GB but close to your data size) to see how it\nbehaves.\n\nWhen I use shared_buffers larger than my data size such as 10 GB, results\nscale nearly as expected at least for this instance type.\n\n> You queries have Aggregation, ORDER/GROUP BY, so there is a chance\n> that I/O can happen for those operation's\n> if PG doesn't have sufficient memory (work_mem) to perform such operation.\n\nI used work_mem as 32 MB, this should be enough for these queries. I also\ntested with higher values of work_mem, and didn't obverse any difference.\n\n> Can you simplify your queries (simple scan or in other words no\n> aggregation or other things) to see how\n> they behave in your env., once you are able to see simple queries\n> scaling as per your expectation, you\n> can try with complex one's.\n\nActually we observe problem when queries start to get simpler such as\nselect count(*). 
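One way to double-check that the lineitem tables really are resident in shared_buffers during these runs is the pg_buffercache contrib view — a sketch, assuming the extension can be installed; 8192 is the default block size and would be 32768 for the 32 KB build:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
WHERE c.relname LIKE 'lineitem%'
GROUP BY c.relname
ORDER BY buffers DESC;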
Here is the results table in more compact format:\n\n select count(*) TPC-H Simple(#6) TPC-H Complex(#1)\n1 Table / 1 query 1.5 s 2.5 s 8.4 s\n2 Tables/ 2 queries 1.5 s 2.5 s 8.4 s\n4 Tables/ 4 queries 2.0 s 2.9 s 8.8 s\n8 Tables/ 8 queries 3.3 s 4.0 s 9.6 s\n\n> Can we have the explain analyze of those queries, postgres\n> configuration, perhaps vmstat output during execution?\n\npostgres=# explain analyze SELECT count(*) from lineitem_1;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=199645.01..199645.02 rows=1 width=0) (actual\ntime=11317.391..11317.393 rows=1 loops=1)\n -> Seq Scan on lineitem_1 (cost=0.00..184641.81 rows=6001281 width=0)\n(actual time=0.011..5805.255 rows=6001215 loops=1)\n Total runtime: 11317.440 ms\n(3 rows)\n\npostgres=# explain analyze SELECT\npostgres-# sum(l_extendedprice * l_discount) as revenue\npostgres-# FROM\npostgres-# lineitem_1\npostgres-# WHERE\npostgres-# l_shipdate >= date '1994-01-01'\npostgres-# AND l_shipdate < date '1994-01-01' + interval '1' year\npostgres-# AND l_discount between 0.06 - 0.01 AND 0.06 + 0.01\npostgres-# AND l_quantity < 24;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=260215.36..260215.37 rows=1 width=16) (actual\ntime=1751.775..1751.776 rows=1 loops=1)\n -> Seq Scan on lineitem_1 (cost=0.00..259657.82 rows=111508 width=16)\n(actual time=0.031..1630.449 rows=114160 loops=1)\n Filter: ((l_shipdate >= '1994-01-01'::date) AND (l_shipdate <\n'1995-01-01 00:00:00'::timestamp without time zone) AND (l_discount >=\n0.05::double precision) AND (l_discount <= 0.07::double precision) AND\n (l_quantity < 24::double precision))\n Rows Removed by Filter: 5887055\n Total runtime: 1751.830 ms\n(5 rows)\n\npostgres=# explain analyze SELECT\n l_returnflag,\n l_linestatus,\n sum(l_quantity) as sum_qty,\n sum(l_extendedprice) as sum_base_price,\n sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,\n sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,\n avg(l_quantity) as avg_qty,\n avg(l_extendedprice) as avg_price,\n avg(l_discount) as avg_disc,\n count(*) as count_order\nFROM\n lineitem_1\nWHERE\n l_shipdate <= date '1998-12-01' - interval '90' day\nGROUP BY\n l_returnflag,\n l_linestatus\nORDER BY\n l_returnflag,\n l_linestatus;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=436342.68..436342.69 rows=6 width=36) (actual\ntime=18720.932..18720.936 rows=4 loops=1)\n Sort Key: l_returnflag, l_linestatus\n Sort Method: quicksort Memory: 25kB\n -> HashAggregate (cost=436342.49..436342.60 rows=6 width=36) (actual\ntime=18720.887..18720.892 rows=4 loops=1)\n -> Seq Scan on lineitem_1 (cost=0.00..199645.01 rows=5917437\nwidth=36) (actual time=0.011..6754.619 rows=5916591 loops=1)\n Filter: (l_shipdate <= '1998-09-02 00:00:00'::timestamp\nwithout time zone)\n Rows Removed by Filter: 84624\n Total runtime: 18721.021 ms\n(8 rows)\n\n\nHere are the results of \"vmstat 1\" while running 8 parallel TPC-H Simple\n(#6) queries: Although there is no need for I/O, \"wa\" fluctuates between 0\nand 1.\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu-----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 0 0 0 30093568 84892 38723896 0 0 0 0 22 14 0 0\n100 0 
0\n 8 1 0 30043056 84892 38723896 0 0 0 0 27080 52708 16\n14 70 0 0\n 8 1 0 30006600 84892 38723896 0 0 0 0 44952 118286 43\n44 12 1 0\n 8 0 0 29986264 84900 38723896 0 0 0 20 28043 95934 49\n42 8 1 0\n 7 0 0 29991976 84900 38723896 0 0 0 0 8308 73641 52\n42 6 0 0\n 0 0 0 30091828 84900 38723896 0 0 0 0 3996 30978 23\n24 53 0 0\n 0 0 0 30091968 84900 38723896 0 0 0 0 17 23 0 0\n100 0 0\n\nI installed PostgreSQL 9.3.1 from source and in postgres configuration file\nI only changed shared buffers (4 GB) and work_mem (32 MB).", "msg_date": "Wed, 4 Dec 2013 14:19:03 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Wed, Dec 4, 2013 at 9:19 AM, Metin Doslu <[email protected]> wrote:\n>\n> Here are the results of \"vmstat 1\" while running 8 parallel TPC-H Simple\n> (#6) queries: Although there is no need for I/O, \"wa\" fluctuates between 0\n> and 1.\n>\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu-----\n> r b swpd free buff cache si so bi bo in cs us sy id wa st\n> 0 0 0 30093568 84892 38723896 0 0 0 0 22 14 0 0 100 0 0\n> 8 1 0 30043056 84892 38723896 0 0 0 0 27080 52708 16 14 70 0 0\n> 8 1 0 30006600 84892 38723896 0 0 0 0 44952 118286 43 44 12 1 0\n> 8 0 0 29986264 84900 38723896 0 0 0 20 28043 95934 49 42 8 1 0\n> 7 0 0 29991976 84900 38723896 0 0 0 0 8308 73641 52 42 6 0 0\n> 0 0 0 30091828 84900 38723896 0 0 0 0 3996 30978 23 24 53 0 0\n> 0 0 0 30091968 84900 38723896 0 0 0 0 17 23 0 0 100 0 0\n\n\nNotice the huge %sy\n\nWhat kind of VM are you using? HVM or paravirtual?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Dec 2013 14:27:10 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Parallel Select query performance and shared buffers" }, { "msg_contents": "On 2013-12-04 14:27:10 -0200, Claudio Freire wrote:\n> On Wed, Dec 4, 2013 at 9:19 AM, Metin Doslu <[email protected]> wrote:\n> >\n> > Here are the results of \"vmstat 1\" while running 8 parallel TPC-H Simple\n> > (#6) queries: Although there is no need for I/O, \"wa\" fluctuates between 0\n> > and 1.\n> >\n> > procs -----------memory---------- ---swap-- -----io---- --system--\n> > -----cpu-----\n> > r b swpd free buff cache si so bi bo in cs us sy id wa st\n> > 0 0 0 30093568 84892 38723896 0 0 0 0 22 14 0 0 100 0 0\n> > 8 1 0 30043056 84892 38723896 0 0 0 0 27080 52708 16 14 70 0 0\n> > 8 1 0 30006600 84892 38723896 0 0 0 0 44952 118286 43 44 12 1 0\n> > 8 0 0 29986264 84900 38723896 0 0 0 20 28043 95934 49 42 8 1 0\n> > 7 0 0 29991976 84900 38723896 0 0 0 0 8308 73641 52 42 6 0 0\n> > 0 0 0 30091828 84900 38723896 0 0 0 0 3996 30978 23 24 53 0 0\n> > 0 0 0 30091968 84900 38723896 0 0 0 0 17 23 0 0 100 0 0\n> \n> \n> Notice the huge %sy\n\nMy bet is on transparent hugepage defragmentation. 
Alternatively it's\nscheduler overhead, due to superflous context switches around the buffer\nmapping locks.\n\nI'd strongly suggest doing a \"perf record -g -a <wait a bit, ctrl-c>;\nperf report\" run to check what's eating up the time.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Dec 2013 17:33:50 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Parallel Select query performance and shared buffers" }, { "msg_contents": ">Notice the huge %sy\n>What kind of VM are you using? HVM or paravirtual?\n\nThis instance is paravirtual.\n\n>Notice the huge %sy>What kind of VM are you using? HVM or paravirtual?This instance is paravirtual.", "msg_date": "Wed, 4 Dec 2013 18:35:25 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "> I'd strongly suggest doing a \"perf record -g -a <wait a bit, ctrl-c>;\n> perf report\" run to check what's eating up the time.\n\nHere is one example:\n\n+ 38.87% swapper [kernel.kallsyms] [k] hypercall_page\n+ 9.32% postgres [kernel.kallsyms] [k] hypercall_page\n+ 6.80% postgres [kernel.kallsyms] [k] xen_set_pte_at\n+ 5.83% postgres [kernel.kallsyms] [k] copy_user_generic_string\n+ 2.06% postgres [kernel.kallsyms] [k] file_read_actor\n+ 1.89% postgres postgres [.] heapgettup_pagemode\n+ 1.83% postgres postgres [.] hash_search_with_hash_value\n+ 1.33% postgres [kernel.kallsyms] [k] get_phys_to_machine\n+ 1.25% postgres [kernel.kallsyms] [k] find_get_page\n+ 1.00% postgres postgres [.] heapgetpage\n+ 0.99% postgres [kernel.kallsyms] [k] radix_tree_lookup_element\n+ 0.98% postgres postgres [.] advance_aggregates\n+ 0.96% postgres postgres [.] ExecProject\n+ 0.94% postgres postgres [.] advance_transition_function\n+ 0.88% postgres postgres [.] ExecScan\n+ 0.87% postgres postgres [.] HeapTupleSatisfiesMVCC\n+ 0.86% postgres postgres [.] LWLockAcquire\n+ 0.82% postgres [kernel.kallsyms] [k] put_page\n+ 0.82% postgres postgres [.] MemoryContextReset\n+ 0.80% postgres postgres [.] SeqNext\n+ 0.78% postgres [kernel.kallsyms] [k] pte_mfn_to_pfn\n+ 0.69% postgres postgres [.] ExecClearTuple\n+ 0.57% postgres postgres [.] ExecProcNode\n+ 0.54% postgres postgres [.] heap_getnext\n+ 0.53% postgres postgres [.] LWLockRelease\n+ 0.53% postgres postgres [.] ExecStoreTuple\n+ 0.51% postgres libc-2.12.so [.] __GI___libc_read\n+ 0.42% postgres [kernel.kallsyms] [k] xen_spin_lock\n+ 0.40% postgres postgres [.] ReadBuffer_common\n+ 0.38% postgres [kernel.kallsyms] [k] __do_fault\n+ 0.37% postgres [kernel.kallsyms] [k] shmem_fault\n+ 0.37% postgres [kernel.kallsyms] [k] unmap_single_vma\n+ 0.35% postgres [kernel.kallsyms] [k] __wake_up_bit\n+ 0.33% postgres postgres [.] StrategyGetBuffer\n+ 0.33% postgres [kernel.kallsyms] [k] set_page_dirty\n+ 0.33% postgres [kernel.kallsyms] [k] handle_pte_fault\n+ 0.33% postgres postgres [.] ExecAgg\n+ 0.31% postgres postgres [.] XidInMVCCSnapshot\n+ 0.31% postgres [kernel.kallsyms] [k] __audit_syscall_entry\n+ 0.31% postgres postgres [.] 
CheckForSerializableConflictOut\n+ 0.29% postgres [kernel.kallsyms] [k] handle_mm_fault\n+ 0.25% postgres [kernel.kallsyms] [k] shmem_getpage_gfp\n\n\n\nOn Wed, Dec 4, 2013 at 6:33 PM, Andres Freund <[email protected]>wrote:\n\n> On 2013-12-04 14:27:10 -0200, Claudio Freire wrote:\n> > On Wed, Dec 4, 2013 at 9:19 AM, Metin Doslu <[email protected]> wrote:\n> > >\n> > > Here are the results of \"vmstat 1\" while running 8 parallel TPC-H\n> Simple\n> > > (#6) queries: Although there is no need for I/O, \"wa\" fluctuates\n> between 0\n> > > and 1.\n> > >\n> > > procs -----------memory---------- ---swap-- -----io---- --system--\n> > > -----cpu-----\n> > > r b swpd free buff cache si so bi bo in\n> cs us sy id wa st\n> > > 0 0 0 30093568 84892 38723896 0 0 0 0 22\n> 14 0 0 100 0 0\n> > > 8 1 0 30043056 84892 38723896 0 0 0 0 27080\n> 52708 16 14 70 0 0\n> > > 8 1 0 30006600 84892 38723896 0 0 0 0 44952\n> 118286 43 44 12 1 0\n> > > 8 0 0 29986264 84900 38723896 0 0 0 20 28043\n> 95934 49 42 8 1 0\n> > > 7 0 0 29991976 84900 38723896 0 0 0 0 8308\n> 73641 52 42 6 0 0\n> > > 0 0 0 30091828 84900 38723896 0 0 0 0 3996\n> 30978 23 24 53 0 0\n> > > 0 0 0 30091968 84900 38723896 0 0 0 0 17\n> 23 0 0 100 0 0\n> >\n> >\n> > Notice the huge %sy\n>\n> My bet is on transparent hugepage defragmentation. Alternatively it's\n> scheduler overhead, due to superflous context switches around the buffer\n> mapping locks.\n>\n> I'd strongly suggest doing a \"perf record -g -a <wait a bit, ctrl-c>;\n> perf report\" run to check what's eating up the time.\n>\n> Greetings,\n>\n> Andres Freund\n>\n> --\n> Andres Freund http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n", "msg_date": "Wed, 4 Dec 2013 18:43:35 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On 2013-12-04 18:43:35 +0200, Metin Doslu wrote:\n> > I'd strongly suggest doing a \"perf record -g -a <wait a bit, ctrl-c>;\n> > perf report\" run to check what's eating up the time.\n> \n> Here is one example:\n> \n> + 38.87% swapper [kernel.kallsyms] [k] hypercall_page\n> + 9.32% postgres [kernel.kallsyms] [k] hypercall_page\n> + 6.80% postgres [kernel.kallsyms] [k] xen_set_pte_at\n\nAll that time is spent in your virtualization solution. 
One thing to try\nis to look on the host system, sometimes profiles there can be more\nmeaningful.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 4 Dec 2013 17:54:10 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Wed, Dec 4, 2013 at 1:54 PM, Andres Freund <[email protected]> wrote:\n> On 2013-12-04 18:43:35 +0200, Metin Doslu wrote:\n>> > I'd strongly suggest doing a \"perf record -g -a <wait a bit, ctrl-c>;\n>> > perf report\" run to check what's eating up the time.\n>>\n>> Here is one example:\n>>\n>> + 38.87% swapper [kernel.kallsyms] [k] hypercall_page\n>> + 9.32% postgres [kernel.kallsyms] [k] hypercall_page\n>> + 6.80% postgres [kernel.kallsyms] [k] xen_set_pte_at\n>\n> All that time is spent in your virtualization solution. One thing to try\n> is to look on the host system, sometimes profiles there can be more\n> meaningful.\n\nYou cannot profile the host on EC2.\n\nYou could try HVM. I've noticed it fare better under heavy CPU load,\nand it's not fully-HVM (it still uses paravirtualized network and\nI/O).\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 4 Dec 2013 16:00:40 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "> You could try HVM. I've noticed it fare better under heavy CPU load,\n> and it's not fully-HVM (it still uses paravirtualized network and\n> I/O).\n\nI already tried with HVM (cc2.8xlarge instance on Amazon EC2) and observed\nsame problem.\n\n> You could try HVM. I've noticed it fare better  under heavy CPU load,> and it's not fully-HVM (it still uses paravirtualized network and> I/O).\nI already tried with HVM (cc2.8xlarge instance on Amazon EC2) and observed same problem.", "msg_date": "Wed, 4 Dec 2013 20:03:26 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Parallel Select query performance and shared buffers" }, { "msg_contents": "On 2013-12-04 16:00:40 -0200, Claudio Freire wrote:\n> On Wed, Dec 4, 2013 at 1:54 PM, Andres Freund <[email protected]> wrote:\n> > All that time is spent in your virtualization solution. One thing to try\n> > is to look on the host system, sometimes profiles there can be more\n> > meaningful.\n> \n> You cannot profile the host on EC2.\n\nDidn't follow the thread from the start. So, this is EC2? 
Have you\nchecked, with a recent enough version of top or whatever, how much time\nis reported as \"stolen\"?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 4 Dec 2013 19:04:11 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "> Didn't follow the thread from the start. So, this is EC2? Have you\n> checked, with a recent enough version of top or whatever, how much time\n> is reported as \"stolen\"?\n\nYes, this EC2. \"stolen\" is randomly reported as 1, mostly as 0.\n\n> Didn't follow the thread from the start. So, this is EC2? Have you\n> checked, with a recent enough version of top or whatever, how much time> is reported as \"stolen\"?\nYes, this EC2. \"stolen\" is randomly reported as 1, mostly as 0.", "msg_date": "Wed, 4 Dec 2013 20:06:27 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "Here are some extra information:\n\n- When we increased NUM_BUFFER_PARTITIONS to 1024, this problem is\ndisappeared for 8 core machines and come back with 16 core machines on\nAmazon EC2. Would it be related with PostgreSQL locking mechanism?\n\n- I tried this test with 4 core machines including my personel computer and\nsome other instances on Amazon EC2, I didn't see this problem with 4 core\nmachines. I started to see this problem in PostgreSQL when core count is 8\nor more.\n\n- Here are the results of \"vmstat 1\" while running 8 parallel select\ncount(*). Normally I would expect zero idle time.\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu-----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 0 0 0 29838640 94000 38954740 0 0 0 0 22 21 0 0\n100 0 0\n 7 2 0 29788416 94000 38954740 0 0 0 0 53922 108490 14\n24 60 1 1\n 5 0 0 29747248 94000 38954740 0 0 0 0 68008 164571 22\n48 27 2 1\n 8 0 0 29725796 94000 38954740 0 0 0 0 43587 150574 28\n54 16 1 1\n 0 0 0 29838328 94000 38954740 0 0 0 0 15584 100459 26\n55 18 1 0\n 0 0 0 29838328 94000 38954740 0 0 0 0 42 15 0 0\n100 0 0\n\n- When I run 8 parallel wc command or other scripts, they scale out as\nexpected and they utilize all cpu. This leads me to think that problem is\nrelated with PostgreSQL instead of OS.\n\nHere are some extra information:- When we increased NUM_BUFFER_PARTITIONS to 1024, this problem is disappeared for 8 core machines and come back with 16 core machines on Amazon EC2. Would it be related with PostgreSQL locking mechanism?\n- I tried this test with 4 core machines including my personel computer and some other instances on Amazon EC2, I didn't see this problem with 4 core machines. I started to see this problem in PostgreSQL when core count is 8 or more.\n- Here are the results of \"vmstat 1\" while running 8 parallel select count(*). 
Normally I would expect zero idle time.\nprocs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----\n r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st 0  0      0 29838640  94000 38954740    0    0     0     0   22   21  0  0 100  0  0 \n 7  2      0 29788416  94000 38954740    0    0     0     0 53922 108490 14 24 60  1  1  5  0      0 29747248  94000 38954740    0    0     0     0 68008 164571 22 48 27  2  1 \n 8  0      0 29725796  94000 38954740    0    0     0     0 43587 150574 28 54 16  1  1  0  0      0 29838328  94000 38954740    0    0     0     0 15584 100459 26 55 18  1  0 \n 0  0      0 29838328  94000 38954740    0    0     0     0   42   15  0  0 100  0  0 \n- When I run 8 parallel wc command or other scripts, they scale out as expected and they utilize all cpu. This leads me to think that problem is related with PostgreSQL instead of OS.", "msg_date": "Wed, 4 Dec 2013 20:19:55 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On 2013-12-04 20:19:55 +0200, Metin Doslu wrote:\n> - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem is\n> disappeared for 8 core machines and come back with 16 core machines on\n> Amazon EC2. Would it be related with PostgreSQL locking mechanism?\n\nYou could try my lwlock-scalability improvement patches - for some\nworkloads here, the improvements have been rather noticeable. Which\nversion are you testing?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 4 Dec 2013 19:26:01 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Parallel Select query performance and shared buffers" }, { "msg_contents": "> You could try my lwlock-scalability improvement patches - for some\n> workloads here, the improvements have been rather noticeable. Which\n> version are you testing?\n\nI'm testing with PostgreSQL 9.3.1.\n\n> You could try my lwlock-scalability improvement patches - for some\n> workloads here, the improvements have been rather noticeable. Which> version are you testing?\nI'm testing with PostgreSQL 9.3.1.", "msg_date": "Wed, 4 Dec 2013 20:28:22 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Wed, Dec 4, 2013 at 10:40 AM, Claudio Freire <[email protected]> wrote:\n> On Wed, Dec 4, 2013 at 12:57 AM, Amit Kapila <[email protected]> wrote:\n>>> As a quick side, we also repeated the same experiment on an EC2 instance\n>>> with 16 CPU cores, and found that the scale out behavior became worse there.\n>>> (We also tried increasing the shared_buffers to 30 GB. This change\n>>> completely solved the scaling out problem on this instance type, but hurt\n>>> our performance on the hi1.4xlarge instances.)\n>>\n>> Instead of 30GB, you can try with lesser value, but it should be close\n>> to your data size.\n>\n> The OS cache should have provided a similar function.\n\n The performance cannot be same when those pages are in shared buffers as\n a. OS can flush those pages\n b. 
loading it again into shared buffers will have some overhead anyway.\n\n> In fact, larger shared buffers shouldn't have made a difference if the\n> main I/O pattern are sequential scans, because they use a ring buffer.\n\n Yeah, this is right, but then why is he able to see scaling when he\nincreased shared_buffers to a larger value?\n\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 5 Dec 2013 09:33:41 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Wed, Dec 4, 2013 at 11:49 PM, Metin Doslu <[email protected]> wrote:\n> Here are some extra information:\n>\n> - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem is\n> disappeared for 8 core machines and come back with 16 core machines on\n> Amazon EC2. Would it be related with PostgreSQL locking mechanism?\n\n I think there is a good chance of improvement here with the patch\nsuggested by Andres in this thread, but\n I still think it might not completely resolve the current problem, as\nthere will be overhead of associating data\n with shared buffers.\n\n Currently NUM_BUFFER_PARTITIONS is fixed, so maybe auto-tuning it\nbased on some parameters could\n help in such situations.\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 09:46:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "> - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem is\n> disappeared for 8 core machines and come back with 16 core machines on\n> Amazon EC2. 
Would it be related with PostgreSQL locking mechanism?\n\nIf we build with -DLWLOCK_STATS to print locking stats from PostgreSQL, we\nsee tons of contention with default value of NUM_BUFFER_PARTITIONS which is\n16:\n\n$ tail -f /tmp/logfile | grep lwlock | egrep -v \"blk 0\"\n...\nPID 15965 lwlock 0: shacq 0 exacq 33 blk 2\nPID 15965 lwlock 34: shacq 14010 exacq 27134 blk 6192\nPID 15965 lwlock 35: shacq 14159 exacq 27397 blk 5426\nPID 15965 lwlock 36: shacq 14111 exacq 27322 blk 4959\nPID 15965 lwlock 37: shacq 14211 exacq 27507 blk 4370\nPID 15965 lwlock 38: shacq 14110 exacq 27294 blk 3980\nPID 15965 lwlock 39: shacq 13962 exacq 27027 blk 3719\nPID 15965 lwlock 40: shacq 14023 exacq 27156 blk 3273\nPID 15965 lwlock 41: shacq 14107 exacq 27309 blk 3201\nPID 15965 lwlock 42: shacq 14120 exacq 27304 blk 2904\nPID 15965 lwlock 43: shacq 14007 exacq 27129 blk 2740\nPID 15965 lwlock 44: shacq 13948 exacq 27027 blk 2616\nPID 15965 lwlock 45: shacq 14041 exacq 27198 blk 2431\nPID 15965 lwlock 46: shacq 14067 exacq 27277 blk 2345\nPID 15965 lwlock 47: shacq 14050 exacq 27203 blk 2106\nPID 15965 lwlock 48: shacq 13910 exacq 26910 blk 2155\nPID 15965 lwlock 49: shacq 14170 exacq 27360 blk 1989\n\nAfter we increased NUM_BUFFER_PARTITIONS to 1024, lock contention is\ndecreased:\n...\nPID 25220 lwlock 1000: shacq 247 exacq 494 blk 1\nPID 25220 lwlock 1001: shacq 198 exacq 394 blk 1\nPID 25220 lwlock 1002: shacq 203 exacq 404 blk 1\nPID 25220 lwlock 1003: shacq 226 exacq 452 blk 1\nPID 25220 lwlock 1004: shacq 235 exacq 470 blk 1\nPID 25220 lwlock 1006: shacq 226 exacq 452 blk 2\nPID 25220 lwlock 1007: shacq 214 exacq 428 blk 1\nPID 25220 lwlock 1008: shacq 225 exacq 448 blk 1\nPID 25220 lwlock 1010: shacq 209 exacq 418 blk 1\nPID 25220 lwlock 1015: shacq 199 exacq 398 blk 1\nPID 25220 lwlock 1016: shacq 214 exacq 426 blk 1\nPID 25220 lwlock 1018: shacq 230 exacq 456 blk 1\nPID 25220 lwlock 1019: shacq 222 exacq 444 blk 3\nPID 25220 lwlock 1023: shacq 262 exacq 524 blk 1\nPID 25220 lwlock 1027: shacq 213 exacq 426 blk 1\nPID 25220 lwlock 1028: shacq 246 exacq 491 blk 1\nPID 25220 lwlock 1029: shacq 226 exacq 452 blk 1\n\n> - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem is> disappeared for 8 core machines and come back with 16 core machines on> Amazon EC2. 
Would it be related with PostgreSQL locking mechanism?\nIf we build with -DLWLOCK_STATS to print locking stats from PostgreSQL, we see tons of contention with default value of NUM_BUFFER_PARTITIONS which is 16:$ tail -f /tmp/logfile | grep lwlock | egrep -v \"blk 0\"\n...PID 15965 lwlock 0: shacq 0 exacq 33 blk 2PID 15965 lwlock 34: shacq 14010 exacq 27134 blk 6192PID 15965 lwlock 35: shacq 14159 exacq 27397 blk 5426PID 15965 lwlock 36: shacq 14111 exacq 27322 blk 4959\nPID 15965 lwlock 37: shacq 14211 exacq 27507 blk 4370PID 15965 lwlock 38: shacq 14110 exacq 27294 blk 3980PID 15965 lwlock 39: shacq 13962 exacq 27027 blk 3719PID 15965 lwlock 40: shacq 14023 exacq 27156 blk 3273\nPID 15965 lwlock 41: shacq 14107 exacq 27309 blk 3201PID 15965 lwlock 42: shacq 14120 exacq 27304 blk 2904PID 15965 lwlock 43: shacq 14007 exacq 27129 blk 2740PID 15965 lwlock 44: shacq 13948 exacq 27027 blk 2616\nPID 15965 lwlock 45: shacq 14041 exacq 27198 blk 2431PID 15965 lwlock 46: shacq 14067 exacq 27277 blk 2345PID 15965 lwlock 47: shacq 14050 exacq 27203 blk 2106PID 15965 lwlock 48: shacq 13910 exacq 26910 blk 2155\nPID 15965 lwlock 49: shacq 14170 exacq 27360 blk 1989After we increased NUM_BUFFER_PARTITIONS to 1024, lock contention is decreased:...PID 25220 lwlock 1000: shacq 247 exacq 494 blk 1\nPID 25220 lwlock 1001: shacq 198 exacq 394 blk 1PID 25220 lwlock 1002: shacq 203 exacq 404 blk 1PID 25220 lwlock 1003: shacq 226 exacq 452 blk 1PID 25220 lwlock 1004: shacq 235 exacq 470 blk 1\nPID 25220 lwlock 1006: shacq 226 exacq 452 blk 2PID 25220 lwlock 1007: shacq 214 exacq 428 blk 1PID 25220 lwlock 1008: shacq 225 exacq 448 blk 1PID 25220 lwlock 1010: shacq 209 exacq 418 blk 1\nPID 25220 lwlock 1015: shacq 199 exacq 398 blk 1PID 25220 lwlock 1016: shacq 214 exacq 426 blk 1PID 25220 lwlock 1018: shacq 230 exacq 456 blk 1PID 25220 lwlock 1019: shacq 222 exacq 444 blk 3\nPID 25220 lwlock 1023: shacq 262 exacq 524 blk 1PID 25220 lwlock 1027: shacq 213 exacq 426 blk 1PID 25220 lwlock 1028: shacq 246 exacq 491 blk 1PID 25220 lwlock 1029: shacq 226 exacq 452 blk 1", "msg_date": "Thu, 5 Dec 2013 11:15:20 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On 2013-12-05 11:15:20 +0200, Metin Doslu wrote:\n> > - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem is\n> > disappeared for 8 core machines and come back with 16 core machines on\n> > Amazon EC2. Would it be related with PostgreSQL locking mechanism?\n> \n> If we build with -DLWLOCK_STATS to print locking stats from PostgreSQL, we\n> see tons of contention with default value of NUM_BUFFER_PARTITIONS which is\n> 16:\n\nIs your workload bigger than RAM? 
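(Those 16 consecutive lwlocks carrying all the blk counts are, presumably, the buffer-mapping partition locks: every lookup or insertion of a page in shared_buffers hashes the buffer tag and takes one of NUM_BUFFER_PARTITIONS locks. Roughly, as a sketch of the 9.3 definitions in src/include/storage/lwlock.h and src/include/storage/buf_internals.h rather than a verbatim copy:\n\n/* number of partitions of the shared buffer mapping hashtable */\n#define NUM_BUFFER_PARTITIONS  16\n\n/* map a buffer tag's hash code to a partition, and to that partition's lwlock */\n#define BufTableHashPartition(hashcode)  ((hashcode) % NUM_BUFFER_PARTITIONS)\n#define BufMappingPartitionLock(hashcode)  ((LWLockId) (FirstBufMappingLock + BufTableHashPartition(hashcode)))\n\nSo with only 16 partitions and 8+ backends faulting pages into shared_buffers at once, blocking on those locks is expected, and raising NUM_BUFFER_PARTITIONS merely spreads the same traffic over more locks.)\n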
I think a good bit of the contention\nyou're seeing in that listing is populating shared_buffers - and might\nactually vanish once you're halfway cached.\n From what I've seen so far, the bigger problem than contention on the\nlwlocks themselves is the spinlock protecting the lwlocks...\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 10:18:41 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "> Is your workload bigger than RAM?\n\nRAM is bigger than the workload (more than a couple of times).\n\n> I think a good bit of the contention\n> you're seeing in that listing is populating shared_buffers - and might\n> actually vanish once you're halfway cached.\n> From what I've seen so far the bigger problem than contention in the\n> lwlocks itself, is the spinlock protecting the lwlocks...\n\nCould you clarify a bit what you mean by \"halfway cached\" and \"spinlock\nprotecting the lwlocks\"?\n", "msg_date": "Thu, 5 Dec 2013 11:33:29 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On 2013-12-05 11:33:29 +0200, Metin Doslu wrote:\n> > Is your workload bigger than RAM?\n> \n> RAM is bigger than workload (more than a couple of times).\n\n> > I think a good bit of the contention\n> > you're seeing in that listing is populating shared_buffers - and might\n> > actually vanish once you're halfway cached.\n> > From what I've seen so far the bigger problem than contention in the\n> > lwlocks itself, is the spinlock protecting the lwlocks...\n> \n> Could you clarify a bit what do you mean by \"halfway cached\"\n\nWell, your stats showed a) fairly low lock counts overall b) a high\npercentage of exclusive locks.\na) indicates the system wasn't running long.\nb) tells me there were lots of changes to the buffer mapping - which\n basically only happens if a buffer is placed or removed from\n shared-buffers.\n\nIf your shared_buffers is big enough to contain most of the data you\nshouldn't see many exclusive locks in comparison to the number of shared\nlocks.\n\n> and \"spinlock protecting the lwlocks\".\n\nEvery LWLock has an internal spinlock to protect its state. So whenever\nsomebody does a LWLockAcquire()/Release(), even if only in shared mode,\nwe currently acquire that spinlock, manipulate the LWLock's state, and\nrelease the spinlock again. In lots of workloads that internal spinlock\nis the contention point, not the length of time the lwlock is\nheld. 
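For reference, the lock itself looks roughly like this in 9.3 (a from-memory sketch of the struct in src/backend/storage/lmgr/lwlock.c, not a verbatim copy):\n\ntypedef struct LWLock\n{\n    slock_t     mutex;        /* spinlock protecting the fields below and the wait queue */\n    bool        releaseOK;    /* ok to wake up waiters? */\n    char        exclusive;    /* # of exclusive holders (0 or 1) */\n    int         shared;       /* # of shared holders */\n    PGPROC     *head;         /* head of the list of waiting backends */\n    PGPROC     *tail;         /* tail of the list of waiting backends */\n} LWLock;\n\nEven a pure shared acquire/release pair has to take and drop that mutex, so\nmany backends hitting the same lwlock still bounce one cache line around.\n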
Especially when they are mostly held in shared mode.\n\nMakes sense?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 10:42:26 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "> You could try my lwlock-scalability improvement patches - for some\n> workloads here, the improvements have been rather noticeable. Which\n> version are you testing?\n\nI tried your patches from the link below. As you suspected, I didn't see any\nimprovements. I tested on PostgreSQL 9.2 stable.\n\nhttp://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=shortlog;h=refs/heads/REL9_2_STABLE-rwlock-contention\n\n\nOn Wed, Dec 4, 2013 at 8:26 PM, Andres Freund <[email protected]>wrote:\n\n> On 2013-12-04 20:19:55 +0200, Metin Doslu wrote:\n> > - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem is\n> > disappeared for 8 core machines and come back with 16 core machines on\n> > Amazon EC2. Would it be related with PostgreSQL locking mechanism?\n>\n> You could try my lwlock-scalability improvement patches - for some\n> workloads here, the improvements have been rather noticeable. Which\n> version are you testing?\n>\n> Greetings,\n>\n> Andres Freund\n>\n> --\n> Andres Freund http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n", "msg_date": "Thu, 5 Dec 2013 17:46:44 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Parallel Select query performance and shared buffers" }, { "msg_contents": "On 2013-12-05 17:46:44 +0200, Metin Doslu wrote:\n> I tried your patches on next link. As you suspect I didn't see any\n> improvements. I tested it on PostgreSQL 9.2 Stable.\n\nYou tested the correct branch, right? Which commit does \"git rev-parse\nHEAD\" show?\n\nBut generally, as long as your profile hides all the important\ninformation behind the hypervisor's cost, you're going to have a hard\ntime analyzing the problems. You really should try to reproduce the\nproblems on native hardware (as similar to the host hardware as\npossible), to get accurate data. 
On CPU bound workloads that information\nis often transportable to the virtual world.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 16:52:46 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Parallel Select query performance and shared buffers" }, { "msg_contents": "> You tested the correct branch, right? Which commit does \"git rev-parse\n> HEAD\" show?\n\nI applied last two patches manually on PostgreSQL 9.2 Stable.\n\n> You tested the correct branch, right? Which commit does \"git rev-parse\n> HEAD\" show?I applied last two patches manually on PostgreSQL 9.2 Stable.", "msg_date": "Thu, 5 Dec 2013 17:57:44 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "> From what I've seen so far the bigger problem than contention in the\n> lwlocks itself, is the spinlock protecting the lwlocks...\n\nPostgres 9.3.1 also reports spindelay, it seems that there is no contention\non spinlocks.\n\nPID 21121 lwlock 0: shacq 0 exacq 33 blk 1 spindelay 0\nPID 21121 lwlock 33: shacq 7602 exacq 14688 blk 4381 spindelay 0\nPID 21121 lwlock 34: shacq 7826 exacq 15113 blk 3786 spindelay 0\nPID 21121 lwlock 35: shacq 7792 exacq 15110 blk 3356 spindelay 0\nPID 21121 lwlock 36: shacq 7803 exacq 15125 blk 3075 spindelay 0\nPID 21121 lwlock 37: shacq 7822 exacq 15177 blk 2756 spindelay 0\nPID 21121 lwlock 38: shacq 7694 exacq 14863 blk 2513 spindelay 0\nPID 21121 lwlock 39: shacq 7914 exacq 15320 blk 2400 spindelay 0\nPID 21121 lwlock 40: shacq 7855 exacq 15203 blk 2220 spindelay 0\nPID 21121 lwlock 41: shacq 7942 exacq 15363 blk 1996 spindelay 0\nPID 21121 lwlock 42: shacq 7828 exacq 15115 blk 1872 spindelay 0\nPID 21121 lwlock 43: shacq 7820 exacq 15159 blk 1833 spindelay 0\nPID 21121 lwlock 44: shacq 7709 exacq 14916 blk 1590 spindelay 0\nPID 21121 lwlock 45: shacq 7831 exacq 15134 blk 1619 spindelay 0\nPID 21121 lwlock 46: shacq 7744 exacq 14989 blk 1559 spindelay 0\nPID 21121 lwlock 47: shacq 7808 exacq 15111 blk 1473 spindelay 0\nPID 21121 lwlock 48: shacq 7729 exacq 14929 blk 1381 spindelay 0\n\n> From what I've seen so far the bigger problem than contention in the> lwlocks itself, is the spinlock protecting the lwlocks...Postgres 9.3.1 also reports spindelay, it seems that there is no contention on spinlocks.\nPID 21121 lwlock 0: shacq 0 exacq 33 blk 1 spindelay 0PID 21121 lwlock 33: shacq 7602 exacq 14688 blk 4381 spindelay 0PID 21121 lwlock 34: shacq 7826 exacq 15113 blk 3786 spindelay 0\nPID 21121 lwlock 35: shacq 7792 exacq 15110 blk 3356 spindelay 0PID 21121 lwlock 36: shacq 7803 exacq 15125 blk 3075 spindelay 0PID 21121 lwlock 37: shacq 7822 exacq 15177 blk 2756 spindelay 0\nPID 21121 lwlock 38: shacq 7694 exacq 14863 blk 2513 spindelay 0PID 21121 lwlock 39: shacq 7914 exacq 15320 blk 2400 spindelay 0PID 21121 lwlock 40: shacq 7855 exacq 15203 blk 2220 spindelay 0\nPID 21121 lwlock 41: shacq 7942 exacq 15363 blk 1996 spindelay 0PID 21121 lwlock 42: shacq 7828 exacq 15115 blk 1872 spindelay 0PID 21121 lwlock 43: shacq 7820 exacq 15159 blk 1833 spindelay 0\nPID 21121 lwlock 44: shacq 7709 exacq 14916 blk 1590 spindelay 0PID 21121 lwlock 45: 
shacq 7831 exacq 15134 blk 1619 spindelay 0PID 21121 lwlock 46: shacq 7744 exacq 14989 blk 1559 spindelay 0\nPID 21121 lwlock 47: shacq 7808 exacq 15111 blk 1473 spindelay 0PID 21121 lwlock 48: shacq 7729 exacq 14929 blk 1381 spindelay 0", "msg_date": "Thu, 5 Dec 2013 18:03:16 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Thu, Dec 5, 2013 at 1:03 PM, Metin Doslu <[email protected]> wrote:\n>> From what I've seen so far the bigger problem than contention in the\n>> lwlocks itself, is the spinlock protecting the lwlocks...\n>\n> Postgres 9.3.1 also reports spindelay, it seems that there is no contention\n> on spinlocks.\n\n\nDid you check hugepages?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 16:13:51 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" } ]
[ { "msg_contents": "We have several independent tables on a multi-core machine serving Select\nqueries. These tables fit into memory; and each Select queries goes over\none table's pages sequentially. In this experiment, there are no indexes or\ntable joins.\n\nWhen we send concurrent Select queries to these tables, query performance\ndoesn't scale out with the number of CPU cores. We find that complex Select\nqueries scale out better than simpler ones. We also find that increasing\nthe block size from 8 KB to 32 KB, or increasing shared_buffers to include\nthe working set mitigates the problem to some extent.\n\nFor our experiments, we chose an 8-core machine with 68 GB of memory from\nAmazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and\nset shared_buffers to 4 GB.\n\nWe then generated 1, 2, 4, and 8 separate tables using the data generator\nfrom the industry standard TPC-H benchmark. Each table we generated, called\nlineitem-1, lineitem-2, etc., had about 750 MB of data. Next, we sent 1, 2,\n4, and 8 concurrent Select queries to these tables to observe the scale out\nbehavior. Our expectation was that since this machine had 8 cores, our run\ntimes would stay constant all throughout. Also, we would have expected the\nmachine's CPU utilization to go up to 100% at 8 concurrent queries. Neither\nof those assumptions held true.\n\nWe found that query run times degraded as we increased the number of\nconcurrent Select queries. Also, CPU utilization flattened out at less than\n50% for the simpler queries. Full results with block size of 8KB are below:\n\n Table select count(*) TPC-H Simple (#6)[2]\n TPC-H Complex (#1)[1]\n1 Table / 1 query 1.5 s 2.5 s\n 8.4 s\n2 Tables / 2 queries 1.5 s 2.5 s\n 8.4 s\n4 Tables / 4 queries 2.0 s 2.9 s\n 8.8 s\n8 Tables / 8 queries 3.3 s 4.0 s\n 9.6 s\n\nWe then increased the block size (BLCKSZ) from 8 KB to 32 KB and recompiled\nPostgreSQL. This change had a positive impact on query completion times.\nHere are the new results with block size of 32 KB:\n\n Table select count(*) TPC-H Simple (#6)[2]\n TPC-H Complex (#1)[1]\n1 Table / 1 query 1.5 s 2.3 s\n 8.0 s\n2 Tables / 2 queries 1.5 s 2.3 s\n 8.0 s\n4 Tables / 4 queries 1.6 s 2.4 s\n 8.1 s\n8 Tables / 8 queries 1.8 s 2.7 s\n 8.3 s\n\nAs a quick side, we also repeated the same experiment on an EC2 instance\nwith 16 CPU cores, and found that the scale out behavior became worse\nthere. (We also tried increasing the shared_buffers to 30 GB. This change\ncompletely solved the scaling out problem on this instance type, but hurt\nour performance on the hi1.4xlarge instances.)\n\nUnfortunately, increasing the block size from 8 to 32 KB has other\nimplications for some of our customers. Could you help us out with the\nproblem here?\n\nWhat can we do to identify the problem's root cause? Can we work around it?\n\nThank you,\nMetin\n\n[1] http://examples.citusdata.com/tpch_queries.html#query-1\n[2] http://examples.citusdata.com/tpch_queries.html#query-6\n\nWe have several independent tables on a multi-core machine serving Select queries. These tables fit into memory; and each Select queries goes over one table's pages sequentially. In this experiment, there are no indexes or table joins.\nWhen we send concurrent Select queries to these tables, query performance doesn't scale out with the number of CPU cores. We find that complex Select queries scale out better than simpler ones. 
We also find that increasing the block size from 8 KB to 32 KB, or increasing shared_buffers to include the working set mitigates the problem to some extent.\nFor our experiments, we chose an 8-core machine with 68 GB of memory from Amazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and set shared_buffers to 4 GB.\nWe then generated 1, 2, 4, and 8 separate tables using the data generator from the industry standard TPC-H benchmark. Each table we generated, called lineitem-1, lineitem-2, etc., had about 750 MB of data. Next, we sent 1, 2, 4, and 8 concurrent Select queries to these tables to observe the scale out behavior. Our expectation was that since this machine had 8 cores, our run times would stay constant all throughout. Also, we would have expected the machine's CPU utilization to go up to 100% at 8 concurrent queries. Neither of those assumptions held true.\nWe found that query run times degraded as we increased the number of concurrent Select queries. Also, CPU utilization flattened out at less than 50% for the simpler queries. Full results with block size of 8KB are below:\n                         Table select count(*)    TPC-H Simple (#6)[2]    TPC-H Complex (#1)[1]1 Table  / 1 query               1.5 s                    2.5 s                  8.4 s\n2 Tables / 2 queries             1.5 s                    2.5 s                  8.4 s4 Tables / 4 queries             2.0 s                    2.9 s                  8.8 s\n8 Tables / 8 queries             3.3 s                    4.0 s                  9.6 sWe then increased the block size (BLCKSZ) from 8 KB to 32 KB and recompiled PostgreSQL. This change had a positive impact on query completion times. Here are the new results with block size of 32 KB:\n                         Table select count(*)    TPC-H Simple (#6)[2]    TPC-H Complex (#1)[1]1 Table  / 1 query               1.5 s                    2.3 s                  8.0 s\n2 Tables / 2 queries             1.5 s                    2.3 s                  8.0 s4 Tables / 4 queries             1.6 s                    2.4 s                  8.1 s\n8 Tables / 8 queries             1.8 s                    2.7 s                  8.3 sAs a quick side, we also repeated the same experiment on an EC2 instance with 16 CPU cores, and found that the scale out behavior became worse there. (We also tried increasing the shared_buffers to 30 GB. This change completely solved the scaling out problem on this instance type, but hurt our performance on the hi1.4xlarge instances.)\nUnfortunately, increasing the block size from 8 to 32 KB has other implications for some of our customers. Could you help us out with the problem here?\nWhat can we do to identify the problem's root cause? Can we work around it?\nThank you,Metin[1] http://examples.citusdata.com/tpch_queries.html#query-1\n[2] http://examples.citusdata.com/tpch_queries.html#query-6", "msg_date": "Tue, 3 Dec 2013 15:49:07 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel Select query performance and shared buffers" }, { "msg_contents": "Metin Doslu wrote:\n\n> When we send concurrent Select queries to these tables, query performance\n> doesn't scale out with the number of CPU cores. We find that complex Select\n> queries scale out better than simpler ones. 
We also find that increasing\n> the block size from 8 KB to 32 KB, or increasing shared_buffers to include\n> the working set mitigates the problem to some extent.\n\nMaybe you could help test this patch:\nhttp://www.postgresql.org/message-id/[email protected]\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 3 Dec 2013 10:53:23 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Tue, Dec 3, 2013 at 10:49 AM, Metin Doslu <[email protected]> wrote:\n> We have several independent tables on a multi-core machine serving Select\n> queries. These tables fit into memory; and each Select queries goes over one\n> table's pages sequentially. In this experiment, there are no indexes or\n> table joins.\n>\n> When we send concurrent Select queries to these tables, query performance\n> doesn't scale out with the number of CPU cores. We find that complex Select\n> queries scale out better than simpler ones. We also find that increasing the\n> block size from 8 KB to 32 KB, or increasing shared_buffers to include the\n> working set mitigates the problem to some extent.\n>\n> For our experiments, we chose an 8-core machine with 68 GB of memory from\n> Amazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and set\n> shared_buffers to 4 GB.\n\n\nIf you are certain your tables fit in RAM, you may want to disable\nsynchronized sequential scans, as they will create contention between\nthe threads.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 3 Dec 2013 13:56:11 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "Looking into syncscan.c, it says in comments:\n\n\"When multiple backends run a sequential scan on the same table, we try to\nkeep them synchronized to reduce the overall I/O needed.\"\n\nBut in my workload, every process was running on a different table.\n\n\n\n\nOn Tue, Dec 3, 2013 at 5:56 PM, Claudio Freire <[email protected]>wrote:\n\n> On Tue, Dec 3, 2013 at 10:49 AM, Metin Doslu <[email protected]> wrote:\n> > We have several independent tables on a multi-core machine serving Select\n> > queries. These tables fit into memory; and each Select queries goes over\n> one\n> > table's pages sequentially. In this experiment, there are no indexes or\n> > table joins.\n> >\n> > When we send concurrent Select queries to these tables, query performance\n> > doesn't scale out with the number of CPU cores. We find that complex\n> Select\n> > queries scale out better than simpler ones. We also find that increasing\n> the\n> > block size from 8 KB to 32 KB, or increasing shared_buffers to include\n> the\n> > working set mitigates the problem to some extent.\n> >\n> > For our experiments, we chose an 8-core machine with 68 GB of memory from\n> > Amazon's EC2 service. 
We installed PostgreSQL 9.3.1 on the instance, and\n> set\n> > shared_buffers to 4 GB.\n>\n>\n> If you are certain your tables fit in RAM, you may want to disable\n> synchronized sequential scans, as they will create contention between\n> the threads.\n>\n\nLooking into syncscan.c, it says in comments:\"When multiple backends run a sequential scan on the same table, we try to keep them synchronized to reduce the overall I/O needed.\"\nBut in my workload, every process was running on a different table.On Tue, Dec 3, 2013 at 5:56 PM, Claudio Freire <[email protected]> wrote:\nOn Tue, Dec 3, 2013 at 10:49 AM, Metin Doslu <[email protected]> wrote:\n\n> We have several independent tables on a multi-core machine serving Select\n> queries. These tables fit into memory; and each Select queries goes over one\n> table's pages sequentially. In this experiment, there are no indexes or\n> table joins.\n>\n> When we send concurrent Select queries to these tables, query performance\n> doesn't scale out with the number of CPU cores. We find that complex Select\n> queries scale out better than simpler ones. We also find that increasing the\n> block size from 8 KB to 32 KB, or increasing shared_buffers to include the\n> working set mitigates the problem to some extent.\n>\n> For our experiments, we chose an 8-core machine with 68 GB of memory from\n> Amazon's EC2 service. We installed PostgreSQL 9.3.1 on the instance, and set\n> shared_buffers to 4 GB.\n\n\nIf you are certain your tables fit in RAM, you may want to disable\nsynchronized sequential scans, as they will create contention between\nthe threads.", "msg_date": "Tue, 3 Dec 2013 18:24:55 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "On Tue, Dec 3, 2013 at 1:24 PM, Metin Doslu <[email protected]> wrote:\n> Looking into syncscan.c, it says in comments:\n>\n> \"When multiple backends run a sequential scan on the same table, we try to\n> keep them synchronized to reduce the overall I/O needed.\"\n>\n> But in my workload, every process was running on a different table.\n\nAh, ok, so that's what you meant by \"independent tables\".\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 3 Dec 2013 14:32:47 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Select query performance and shared buffers" }, { "msg_contents": "> Maybe you could help test this patch:\n>\nhttp://www.postgresql.org/message-id/[email protected]\n\nWhich repository should I apply these patches. I tried main repository, 9.3\nstable and source code of 9.3.1, and in my trials at least of one the\npatches is failed. What patch command should I use?\n\n> Maybe you could help test this patch:> http://www.postgresql.org/message-id/[email protected]\nWhich repository should I apply these patches. I tried main repository, 9.3 stable and source code of 9.3.1, and in my trials at least of one the patches is failed. What patch command should I use?", "msg_date": "Wed, 4 Dec 2013 15:19:29 +0200", "msg_from": "Metin Doslu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Parallel Select query performance and shared buffers" } ]
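A rough way to double-check the two knobs discussed in this thread, whether the lineitem tables really fit in shared_buffers and whether synchronized sequential scans could be involved, is sketched below. The table name lineitem1 and the count(*) query are placeholders for the actual TPC-H tables and benchmark queries; adjust them to the real names.

    -- Size of the benchmark tables versus the configured buffer pool
    -- (lineitem1 .. lineitem8 are assumed names):
    SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS on_disk_size
    FROM pg_class
    WHERE relkind = 'r' AND relname LIKE 'lineitem%';

    SHOW shared_buffers;
    SHOW block_size;    -- reports the BLCKSZ the server was compiled with

    -- Rule out synchronized-scan effects in one session and time a scan:
    SET synchronize_seqscans = off;
    \timing on
    SELECT count(*) FROM lineitem1;

Since each backend in the benchmark scans a different table, synchronize_seqscans is unlikely to be the root cause, but comparing timings with it on and off costs little.
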
[ { "msg_contents": "Hello,\n\nWe are starting a new project to deploy a solution in cloud with the possibility to be used for 2.000+ clients. Each of this clients will use several tables to store their information (our model has about 500+ tables but there's less than 100 core table with heavy use). Also the projected ammout of information per client could be from small (few hundreds tuples/MB) to huge (few millions tuples/GB).\n\nOne of the many questions we have is about performance of the db if we work with only one (using a ClientID to separete de clients info) or thousands of separate dbs. The management of the dbs is not a huge concert as we have an automated tool.\n\nAt Google there's lots of cases about this subject but none have a scenario that matchs with the one I presented above, so I would like to know if anyone here has a similar situation or knowledgement and could share some thoughts.\n\n\nThanks\n\nMax\nHello,We are starting a new project to deploy a solution in cloud with the possibility to be used for 2.000+ clients. Each of this clients will use several tables to store their information (our model has about 500+ tables but there's less than 100 core table with heavy use). Also the projected ammout of information per client could be from small (few hundreds tuples/MB) to huge (few millions tuples/GB).One of the many questions we have is about performance of the db if we work with only one (using a ClientID to separete de clients info) or thousands of separate dbs. The management of the dbs is not a huge concert as we have an automated tool.At Google there's lots of cases about this subject but none have a scenario that matchs with the one I presented above, so I would like to know if anyone here has a similar situation or knowledgement and could share some thoughts.ThanksMax", "msg_date": "Thu, 5 Dec 2013 02:42:10 -0800 (PST)", "msg_from": "Max <[email protected]>", "msg_from_op": true, "msg_subject": "One huge db vs many small dbs" }, { "msg_contents": "On Thu, Dec 5, 2013 at 2:42 AM, Max <[email protected]> wrote:\n\n> We are starting a new project to deploy a solution in cloud with the\n> possibility to be used for 2.000+ clients. Each of this clients will use\n> several tables to store their information (our model has about 500+ tables\n> but there's less than 100 core table with heavy use). Also the projected\n> ammout of information per client could be from small (few hundreds tuples/MB)\n> to huge (few millions tuples/GB).\n>\n> One of the many questions we have is about performance of the db if we\n> work with only one (using a ClientID to separete de clients info) or\n> thousands of separate dbs. The management of the dbs is not a huge\n> concert as we have an automated tool.\n>\n\nMore details would be helpful, some of which could include:\nhow much memory is dedicated to Postgresql,\nhow many servers,\nare you using replication/hot standby,\nwhat are you data access patterns like (mostly inserts/lots of concurrent\nqueries, a handful of users versus hundreds querying at the same time),\nwhat are your plans for backups,\nwhat are you planning to do to archive older data?\nAlso, have you considered separate schemas rather than separate databases?\n\nOn Thu, Dec 5, 2013 at 2:42 AM, Max <[email protected]> wrote:\n\n\nWe are starting a new project to deploy a solution in cloud with the possibility to be used for 2.000+ clients. 
Each of this clients will use several tables to store their information (our model has about 500+ tables but there's less than 100 core table with heavy use). Also the projected ammout of information per client could be from small (few hundreds tuples/MB) to huge (few millions tuples/GB).\n\nOne of the many questions we have is about performance of the db if we work with only one (using a ClientID to separete de clients info) or thousands of separate dbs. The management of the dbs is not a huge concert as we have an automated tool.\nMore details would be helpful, some of which could include:how much memory is dedicated to Postgresql,how many servers,are you using replication/hot standby,\nwhat are you data access patterns like (mostly inserts/lots of concurrent queries, a handful of users versus hundreds querying at the same time),what are your plans for backups,what are you planning to do to archive older data?\nAlso, have you considered separate schemas rather than separate databases?", "msg_date": "Thu, 5 Dec 2013 06:28:41 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One huge db vs many small dbs" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 12/05/2013 02:42 AM, Max wrote:\n> Hello,\n> \n> We are starting a new project to deploy a solution in cloud with\n> the possibility to be used for 2.000+ clients. Each of this clients\n> will use several tables to store their information (our model has\n> about 500+ tables but there's less than 100 core table with heavy\n> use). Also the projected ammout of information per client could be\n> from small (few hundreds tuples/MB) to huge (few millions\n> tuples/GB).\n> \n> One of the many questions we have is about performance of the db if\n> we work with only one (using a ClientID to separete de clients\n> info) or thousands of separate dbs. 
The management of the dbs is\n> not a huge concert as we have an automated tool.\n\nIf I understand correctly: 500 tables x 2000 = 1 million tables\n\nEven if not heavily used, in my experience 1 million tables in a\nsingle database will cause problems for you:\n1) on Postgres versions < 9.3, pg_dump takes *long* time (think days)\n2) psql tab complete really slow\n3) probably others I'm not thinking of right now...\n\nThere are advantages to not needing to manage so many databases, but I\nwould test it carefully before committing.\n\nJoe\n\n- -- \nJoe Conway\ncredativ LLC: http://www.credativ.us\nLinux, PostgreSQL, and general Open Source\nTraining, Service, Consulting, & 24x7 Support\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.12 (GNU/Linux)\nComment: Using GnuPG with Thunderbird - http://www.enigmail.net/\n\niQIcBAEBAgAGBQJSoKJ9AAoJEDfy90M199hlSlgP/10lk4HZ3lga1RMMtzAlzYul\n92NIS1MIDQLb/Uo6DPsbchh9aAU1MZjuC0fuTwOAAjfXMgyKO9AbEgbkf1PlLn1R\nLrG/pOdzBEJp67fIqWckBwMKzE8RjetQnyDykkW893xgRE4woyMtPdk1ywPT1iFK\nIX9HgzTEhnHH4FSkFcxRtqWmgJX5eigKEXfC8wLE8//8VJye0Ej0wS04PXPkkKvM\nDBOJ8ba9A853nl4F4l26jmoJ6iiMJqsxHYJsJMX45tFDsyuvf4E4r9y9CHbXlEw0\n1o/DTLHqKK2uDniz3pVnCuqHxtPr0IoD7imkh5gGgi40VKBzpCzfNg9NQMw02OL2\nwpvJJeWynKwny/3BTN0ZW5mLb1iP1PLZRsr1ivwbVRUARfYoShWRB1fMruuXSvV4\nA7hO4tGDCrvB/R2BxS0/ssLvO9vxX+sHTleAP4Uoz2kv5MBuJRRZsFlb8ejOB3gg\niWb4QJOh93NVJgW6M2y496d8Zoz2Vq2o8QUOOzh49QmQjQE3tyXgsO4VmrpUxwHg\nzK0d+Qlkua9U433+dNQBs2i4mf1K58LJ0uQde2ibULk6Tgq+uJePmWfzKPhkwamV\n1d3Iu7UgE5JigzmdWJy4GdJiVGLsTdOtGFHJhMEIFYZ/pHF8WoAtlx6D1SkaCNDr\nIiR6V5n+xDuuPkQcDBp0\n=FGZi\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 05 Dec 2013 07:57:49 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One huge db vs many small dbs" }, { "msg_contents": "maxabbr wrote\n> Hello,\n> \n> We are starting a new project to deploy a solution in cloud with the\n> possibility to be used for 2.000+ clients. Each of this clients will use\n> several tables to store their information (our model has about 500+ tables\n> but there's less than 100 core table with heavy use). Also the projected\n> ammout of information per client could be from small (few hundreds\n> tuples/MB) to huge (few millions tuples/GB).\n> \n> One of the many questions we have is about performance of the db if we\n> work with only one (using a ClientID to separete de clients info) or\n> thousands of separate dbs. The management of the dbs is not a huge concert\n> as we have an automated tool.\n> \n> At Google there's lots of cases about this subject but none have a\n> scenario that matchs with the one I presented above, so I would like to\n> know if anyone here has a similar situation or knowledgement and could\n> share some thoughts.\n> \n> \n> Thanks\n> \n> Max\n\nMy untested thoughts here is a hybrid approach. Allow any one database to\ncontain any number of stores on a common schema with a controlling clientId\ncolumn. But allow for multiple databases. 
Furthermore, any non-client\nshared data could belong to a separate database of reference with the\npossibility of caching said data in each of the client databases where\napplicable.\n\nThough until your needs dictate that level of complexity you can have just\none data and schema set for all clients.\n\nWhile row-level-security will make this more tenable generally this model\nworks best if all client access is made via middleware. You mitigate that\nby using separate databases for any clients with a higher risk profile\n(i.e., larger datasets, direct access to the DB, etc...)\n\nAdding in clientId overhead will degrade performance somewhat but increase\nyour flexibility considerably. That is often a worthwhile trade-off to make\neven if you decided to create separate schemas/databases.\n\nDavid J.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/One-huge-db-vs-many-small-dbs-tp5781827p5781924.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 08:14:39 -0800 (PST)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One huge db vs many small dbs" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Max\n> Sent: Thursday, December 05, 2013 5:42 AM\n> To: [email protected]\n> Subject: [PERFORM] One huge db vs many small dbs\n> \n> Hello,\n> \n> \n> We are starting a new project to deploy a solution in cloud with the possibility\n> to be used for 2.000+ clients. Each of this clients will use several tables to\n> store their information (our model has about 500+ tables but there's less\n> than 100 core table with heavy use). Also the projected ammout of\n> information per client could be from small (few hundreds tuples/MB) to\n> huge (few millions tuples/GB).\n> \n> \n> One of the many questions we have is about performance of the db if we\n> work with only one (using a ClientID to separete de clients info) or thousands\n> of separate dbs. The management of the dbs is not a huge concert as we\n> have an automated tool.\n\nIf you are planning on using persisted connections, the large number of DB approach is going to have a significant disadvantage. You cannot pool connections between databases. So if you have 2000 databases, you are going to need a minimum of 2000 connections to service those database (assuming you want to keep at least one active connection open per client at a time).\n\nBrad.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 21:37:38 +0000", "msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One huge db vs many small dbs" }, { "msg_contents": "\n\n>> One of the many questions we have is about performance of the db if we\n>> work with only one (using a ClientID to separete de clients info) or thousands\n>> of separate dbs. The management of the dbs is not a huge concert as we\n>> have an automated tool.\n>\n> If you are planning on using persisted connections, the large number of DB approach is going to have a significant disadvantage. You cannot pool connections between databases. 
So if you have 2000 databases, you are going to need a minimum of 2000 connections to service those database (assuming you want to keep at least one active connection open per client at a time).\n\nThat isn't exactly true. You could run multiple poolers.\n\nJD\n\n>\n> Brad.\n>\n>\n>\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nFor my dreams of your image that blossoms\n a rose in the deeps of my heart. - W.B. Yeats\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 05 Dec 2013 13:53:51 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One huge db vs many small dbs" }, { "msg_contents": "On Thu, Dec 05, 2013 at 02:42:10AM -0800, Max wrote:\n> Hello,\n> \n> We are starting a new project to deploy a solution in cloud with the\n> possibility to be used for 2.000+ clients. Each of this clients will�use\n> several tables to store their information (our model has about 500+\n> tables but there's less than 100 core table with heavy use). Also the\n> projected ammout of information per client could be from small (few\n> hundreds tuples/MB) to huge (few millions tuples/GB).\n> \n> One of the many questions we have is about performance of the db if we\n> work with only one (using a ClientID to separete de clients info) or\n> thousands of separate dbs. The management�of the dbs is not a huge\n> concert as we have an automated tool.\n> \n> At Google there's lots of cases about this subject but none have a\n> scenario that�matchs with the one I presented above, so I would like to\n> know if anyone here has a similar situation or knowledgement and could\n> share some thoughts.\n\n\nWe have made very good experiences with putting each client into its own\ndatabase. We have a few thousand dbs now on 5 machines (each 1TB capacity)\nwhere each client/db is between 100MB and 100GB of data.\nAs Josh said you have to consider the db overhead. If you have only a few\nMBs of data per client it might not be worth it. (An empty DB shows up with\n6MB size in psql \\l+).\n\nThe good thing with a db per client is you can easily scale horizontically\nby just adding machines. We have between 100 and 1000 dbs per machine,\ndepending on client size. There's no real limit on growth regarding client\nnumbers, we can just always add more machines. We can also easily move\nclients between machines with pg_dump piped into pg_restore.\n\nI would not advise using one schema per client, because then you lose the\nability to really use schemas within each client 'namespace'. Afaik schemas\ncannot be stacked in Postgres. Schemas are very helpful to seperate\ndifferent applications or to implement versioning for complex\nviews/functions, so let's not waste them for partitioning.\n\nFurther things we learned:\n\n- \"CREATE DATABASE foo TEMPLATE bar\" is a nice way to cleanly create a new\npartition/client based on a template database.\n\n- On a very busy server (I/O wise) CREATE DATABASE can take a while to\ncomplete, due to the enforced CHECKPOINT when creating a new DB. 
We worked\naround this by creating empty dbs from the template beforehand, allocating\n(renaming) them on demand and periodically restocking those spare dbs.\n\n- pg_dump/pg_restore on individual client dbs is a neat way to implement\nbackup/restore. It allows you to backup all clients sequentially as well as\nconcurrently (especially from multiple machines) depending on your\nrequirements.\n\n- When partitioning into databases it's not trivial to reference data in\nother databases. E.g. you can't have foreign keys to your main db (where\nyou track all clients and their dbs). This could probably be worked around\nwith dblink / foreign data wrappers if necessary.\n\n- We just completed painless migration from 9.0 to 9.3 simply by installing\n9.3 next to 9.0 on all machines and selectively migrating individual client\ndbs with pg_dump | pg_restore over a period of 6 weeks. (We did not notice\nany problems btw).\n\n- Queries over the data of all clients (e.g. for internal monitoring or\nstatistics) naturally take a while as you'll have to connect to all\nindividual dbs and then manually aggregate the result from each one.\n\n- Schema changes are not trivial as you need to develop tools to apply them\nto all client dbs and template dbs (in our case). It gets tricky when the\nprocess in interrupted and there are race conditions when new dbs are\ncreated in the process that you need to protect against.\n\n- Deleting all data of an individual client is a simple as dropping the db.\n\n\nHope that helps.\n\nOliver\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 7 Dec 2013 13:43:18 +0100", "msg_from": "Oliver Seemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One huge db vs many small dbs" } ]
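The database-per-client workflow described above can be sketched with a handful of statements. The names client_template and client_1234 are made-up placeholders, and the dump/restore pipe is just one way to move a client between machines.

    -- Create a new client partition from a prepared template database:
    CREATE DATABASE client_1234 TEMPLATE client_template;

    -- Move one client to another machine (shell command; host names are
    -- placeholders and the empty target database must already exist):
    --   pg_dump -Fc -h old-host client_1234 | pg_restore -h new-host -d client_1234

    -- Dropping all of a client's data is then a single statement:
    DROP DATABASE client_1234;

As noted above, CREATE DATABASE forces a checkpoint, so on a busy server it can pay to pre-create spare databases from the template and hand them out on demand.
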
[ { "msg_contents": "Hello guys, \n\n\nWhen I excute a query,  the exection time is about 1 minute; however, when I execute the query with explain analyze the excution time jumps to 10 minutes. \n\nI have tried this for several queries, where  I need to optimize;  and using explain analyze leads alway to a huge time overhead in factor of 10.\n\n\nThis is a little bit starnge for me; did any one experience somthing like this? Can I trust the generated plans?\n\n\nRegards\nHello guys, When I excute a query,  the exection time is about 1 minute; however, when I execute the query with explain analyze the excution time jumps to 10 minutes. I have tried this for several queries, where  I need to optimize;  and using explain analyze leads alway to a huge time overhead in factor of 10.This is a little bit starnge for me; did any one experience somthing like this? Can I trust the generated plans?Regards", "msg_date": "Thu, 5 Dec 2013 06:09:00 -0800 (PST)", "msg_from": "salah jubeh <[email protected]>", "msg_from_op": true, "msg_subject": "Explain analyze time overhead" }, { "msg_contents": "On 05-12-13 15:09, salah jubeh wrote:\n>\n> Hello guys,\n>\n> When I excute a query, the exection time is about 1 minute; however, \n> when I execute the query with explain analyze the excution time jumps \n> to 10 minutes.\n> I have tried this for several queries, where I need to optimize; and \n> using explain analyze leads alway to a huge time overhead in factor of 10.\n>\n> This is a little bit starnge for me; did any one experience somthing \n> like this? Can I trust the generated plans?\n>\n> Regards\n\nExplain analyze does a lot more work than just explaining the query, it \nexcecutes it and takes not of how long things actually took, which \nitself takes time. Apparently on some machines, it can take much longer \nthan just executing the query would take.\n\n From the manual:\n\"In order to measure the run-time cost of each node in the execution \nplan, the current implementation ofEXPLAIN ANALYZEadds profiling \noverhead to query execution. As a result, runningEXPLAIN ANALYZEon a \nquery can sometimes take significantly longer than executing the query \nnormally. The amount of overhead depends on the nature of the query, as \nwell as the platform being used. The worst case occurs for plan nodes \nthat in themselves require very little time per execution, and on \nmachines that have relatively slow operating system calls for obtaining \nthe time of day.\"\n\n\n\n\n\n\n\n\n\nOn 05-12-13 15:09, salah jubeh wrote:\n\n\n\n\n\nHello\n guys, \n\n\n\nWhen I\n excute a query,  the exection time is about 1 minute; however,\n when I execute the query with explain analyze the excution\n time jumps to 10 minutes. \n\nI have\n tried this for several queries, where  I need to optimize; \n and using explain analyze leads alway to a huge time overhead\n in factor of 10.\n\n\n\nThis is a\n little bit starnge for me; did any one experience somthing\n like this? Can I trust the generated plans?\n\n\n\nRegards\n\n\n\n Explain analyze does a lot more work than just explaining the query,\n it excecutes it and takes not of how long things actually took,\n which itself takes time. Apparently on some machines, it can take\n much longer than just executing the query would take.\n\n From the manual:\n\n\"In order to measure the run-time cost of each node\n in the execution plan, the current implementation of EXPLAIN ANALYZE adds\n profiling overhead to query execution. 
As a result, running EXPLAIN ANALYZE on a\n query can sometimes take significantly longer than executing the\n query normally. The amount of overhead depends on the nature of\n the query, as well as the platform being used. The worst case\n occurs for plan nodes that in themselves require very little time\n per execution, and on machines that have relatively slow operating\n system calls for obtaining the time of day.\"", "msg_date": "Thu, 05 Dec 2013 15:21:20 +0100", "msg_from": "vincent elschot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain analyze time overhead" }, { "msg_contents": "salah jubeh <[email protected]> writes:\n> When I excute a query,� the exection time is about 1 minute; however, when I execute the query with explain analyze the excution time jumps to 10 minutes. \n\nThis isn't exactly unheard of, although it sounds like you have a\nparticularly bad case. Cheap commodity PCs tend to have clock hardware\nthat takes multiple microseconds to read ... which was fine thirty years\nago when that hardware design was set, but with modern CPUs that's\npainfully slow.\n\nShort of getting a better machine, you might look into whether you can run\na 64-bit instead of 32-bit operating system. In some cases that allows\na clock reading to happen without a context switch to the kernel.\n\n> This is a little bit starnge for me; did any one experience somthing like this? Can I trust the generated plans?\n\nThe numbers are fine as far as they go, but you should realize that the\nrelative cost of the cheaper plan nodes is being overstated, since the\nadded instrumentation cost is the same per node call regardless of how\nmuch work happens within the node.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 05 Dec 2013 09:22:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain analyze time overhead" }, { "msg_contents": "Hello Tom,\n\nThe hardware is pretty good, I have 8 cpus of Intel(R) Core(TM) i7, 2.4 GH , and 16 Gib of RAM. Is there any configuration parameter that can lead to this issue.\n\nRegards\n\n\n\n\n\nOn Thursday, December 5, 2013 3:23 PM, vincent elschot <[email protected]> wrote:\n \n\n\nOn 05-12-13 15:09, salah jubeh wrote:\n\n\n>\n>Hello guys, \n>\n>\n>\n>When I excute a query,  the exection time is about 1 minute; however, when I execute the query with explain analyze the excution time jumps to 10 minutes. \n>\n>I have tried this for several queries, where  I need to optimize;  and using explain analyze leads alway to a huge time overhead in factor of 10.\n>\n>\n>\n>This is a little bit starnge for me; did any one experience somthing like this? Can I trust the generated plans?\n>\n>\n>\n>Regards\nExplain analyze does a lot more work than just explaining the query,\n it excecutes it and takes not of how long things actually took,\n which itself takes time. Apparently on some machines, it can take\n much longer than just executing the query would take.\n\nFrom the manual:\n\n\"In order to measure the run-time cost of each node in the execution plan, the current implementation of EXPLAIN ANALYZE adds profiling overhead to query execution. As a result, running EXPLAIN ANALYZE on a query can sometimes take significantly longer than executing the query normally. 
The amount of overhead depends on the nature of the query, as well as the platform being used. The worst case occurs for plan nodes that in themselves require very little time per execution, and on machines that have relatively slow operating system calls for obtaining the time of day.\"\nHello Tom,The hardware is pretty good, I have 8 cpus of Intel(R) Core(TM) i7, 2.4 GH , and 16 Gib of RAM. Is there any configuration parameter that can lead to this issue.Regards On Thursday, December 5, 2013 3:23 PM, vincent elschot <[email protected]> wrote: \n\nOn 05-12-13 15:09, salah jubeh wrote:\n\n\n\n\n\nHello\n guys, \n\n\n\nWhen I\n excute a query,  the exection time is about 1 minute; however,\n when I execute the query with explain analyze the excution\n time jumps to 10 minutes. \n\nI have\n tried this for several queries, where  I need to optimize; \n and using explain analyze leads alway to a huge time overhead\n in factor of 10.\n\n\n\nThis is a\n little bit starnge for me; did any one experience somthing\n like this? Can I trust the generated plans?\n\n\n\nRegards\n\n\n\n Explain analyze does a lot more work than just explaining the query,\n it excecutes it and takes not of how long things actually took,\n which itself takes time. Apparently on some machines, it can take\n much longer than just executing the query would take.\n\n From the manual:\n\"In order to measure the run-time cost of each node\n in the execution plan, the current implementation of EXPLAIN ANALYZE adds\n profiling overhead to query execution. As a result, running EXPLAIN ANALYZE on a\n query can sometimes take significantly longer than executing the\n query normally. The amount of overhead depends on the nature of\n the query, as well as the platform being used. The worst case\n occurs for plan nodes that in themselves require very little time\n per execution, and on machines that have relatively slow operating\n system calls for obtaining the time of day.\"", "msg_date": "Thu, 5 Dec 2013 06:43:47 -0800 (PST)", "msg_from": "salah jubeh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explain analyze time overhead" }, { "msg_contents": "salah jubeh <[email protected]> wrote:\n\n> The hardware is pretty good, I have 8 cpus of Intel(R) Core(TM)\n> i7, 2.4 GH , and 16 Gib of RAM. Is there any configuration\n> parameter that can lead to this issue.\n\nWhat OS?\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 12:29:35 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain analyze time overhead" }, { "msg_contents": "On Thu, Dec 5, 2013 at 09:22:14AM -0500, Tom Lane wrote:\n> salah jubeh <[email protected]> writes:\n> > When I excute a query,� the exection time is about 1 minute; however, when I execute the query with explain analyze the excution time jumps to 10 minutes. \n> \n> This isn't exactly unheard of, although it sounds like you have a\n> particularly bad case. Cheap commodity PCs tend to have clock hardware\n> that takes multiple microseconds to read ... which was fine thirty years\n> ago when that hardware design was set, but with modern CPUs that's\n> painfully slow.\n> \n> Short of getting a better machine, you might look into whether you can run\n> a 64-bit instead of 32-bit operating system. 
In some cases that allows\n> a clock reading to happen without a context switch to the kernel.\n> \n> > This is a little bit starnge for me; did any one experience somthing like this? Can I trust the generated plans?\n> \n> The numbers are fine as far as they go, but you should realize that the\n> relative cost of the cheaper plan nodes is being overstated, since the\n> added instrumentation cost is the same per node call regardless of how\n> much work happens within the node.\n\nThe original poster might also want to run pg_test_timing to get\nhardware timing overhead:\n\n\thttp://www.postgresql.org/docs/9.3/static/pgtesttiming.html\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Dec 2013 15:40:09 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain analyze time overhead" }, { "msg_contents": ">>>On Thu, Dec  5, 2013 at 09:22:14AM -0500, Tom Lane wrote:\n>>> salah jubeh <[email protected]> writes:\n>>> When I excute a query,�the exection time is about 1 minute; \nhowever, when I execute the query with explain analyze the excution time\n jumps to 10 minutes. \n>> \n>> This isn't exactly unheard of, although it sounds like you have a\n>> particularly bad case.  Cheap commodity PCs tend to have clock hardware\n>> that takes multiple microseconds to read ... which was fine thirty years\n>> ago when that hardware design was set, but with modern CPUs that's\n>> painfully slow.\n>> \n>> Short of getting a better machine, you might look into whether you can run\n>> a 64-bit instead of 32-bit operating system.  In some cases that allows\n>> a clock reading to happen without a context switch to the kernel.\n>> \n>> > This is a little bit starnge for me; did any one experience somthing like this? Can I trust the generated plans?\n>> \n>> The numbers are fine as far as they go, but you should realize that the\n>> relative cost of the cheaper plan nodes is being overstated, since the\n>> added instrumentation cost is the\n same per node call regardless of how\n>> much work happens within the node.\n\n>The original poster might also want to run pg_test_timing to get\n>hardware timing overhead:\n>   http://www.postgresql.org/docs/9.3/static/pgtesttiming.html\n\nThanks for the link, I find it very useful,  unfortunatly I am using 9.1.11 version. \n\nAfter digging a little bit, I find out that the gettimeofday is indeed a little bit slower on this particular machine than other machines, but it is not that significanat difference. The query I am running is not optimized, and for some  reason the material operator is the one which causes most of the overhead. The whole issue is due to cross colums statistics and highly correlated predicates, the number of estimated records are much less than the actual number.  Still, I did not understand completly, why the material operator consume about 9 minutes when I run explain analyze. i.e how many times we call gettimeofday for the material operator -I need to calculate this-? 
Finally, for testing purposes, I have disabled material  and the query execution time dropped from 1 minute to 12 second.\n\nRegards\n-- \n\n\n\nOn Tuesday, December 10, 2013 9:42 PM, Bruce Momjian <[email protected]> wrote:\n \nOn Thu, Dec  5, 2013 at 09:22:14AM -0500, Tom Lane wrote:\n> salah jubeh <[email protected]> writes:\n> > When I excute a query,�the exection time is about 1 minute; however, when I execute the query with explain analyze the excution time jumps to 10 minutes. \n> \n> This isn't exactly unheard of, although it sounds like you have a\n> particularly bad case.  Cheap commodity PCs tend to have clock hardware\n> that takes multiple microseconds to read ... which was fine thirty years\n> ago when that hardware design was set, but with modern CPUs that's\n> painfully slow.\n> \n> Short of getting a better machine, you might look into whether you can run\n> a 64-bit instead of 32-bit operating system.  In some cases that allows\n> a clock reading\n to happen without a context switch to the kernel.\n> \n> > This is a little bit starnge for me; did any one experience somthing like this? Can I trust the generated plans?\n> \n> The numbers are fine as far as they go, but you should realize that the\n> relative cost of the cheaper plan nodes is being overstated, since the\n> added instrumentation cost is the same per node call regardless of how\n> much work happens within the node.\n\nThe original poster might also want to run pg_test_timing to get\nhardware timing overhead:\n\n    http://www.postgresql.org/docs/9.3/static/pgtesttiming.html\n\n-- \n  Bruce Momjian  <[email protected]>        http://momjian.us\n  EnterpriseDB                            http://enterprisedb.com\n\n  + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n>>>On Thu, Dec  5, 2013 at 09:22:14AM -0500, Tom Lane wrote:>>> salah jubeh <[email protected]> writes:>>> When I excute a query,�the exection time is about 1 minute; \nhowever, when I execute the query with explain analyze the excution time\n jumps to 10 minutes. >> >> This isn't exactly unheard of, although it sounds like you have a>> particularly bad case.  Cheap commodity PCs tend to have clock hardware>> that takes multiple microseconds to read ... which was fine thirty years>> ago when that hardware design was set, but with modern CPUs that's>> painfully slow.>> >> Short of getting a better machine, you might look into whether you can run>> a 64-bit instead of 32-bit operating system.  In some cases that allows>> a clock reading to happen without a context switch to the kernel.>> >> > This is a little bit starnge for me; did any one experience somthing like this? Can I trust the generated plans?>> >> The numbers are fine as far as they go, but you should realize that the>> relative cost of the cheaper plan nodes is being overstated, since the>> added instrumentation cost is the\n same per node call regardless of how>> much work happens within the node.>The original poster might also want to run pg_test_timing to get>hardware timing overhead:>   http://www.postgresql.org/docs/9.3/static/pgtesttiming.htmlThanks for the link, I find it very useful,  unfortunatly I am using 9.1.11 version. After digging a little bit, I find out that the gettimeofday is indeed a little bit slower on this particular machine than other machines, but it is not that significanat difference. 
The query I am running is not optimized, and for some  reason the material operator is the one which causes most of the overhead. The whole issue is due to cross colums\n statistics and highly correlated predicates, the number of estimated records are much less than the actual number.  Still, I did not understand completly, why the material operator consume about 9 minutes when I run explain analyze. i.e how many times we call gettimeofday for the material operator -I need to calculate this-? Finally, for testing purposes, I have disabled material  and the query execution time dropped from 1 minute to 12 second.Regards-- On Tuesday, December 10, 2013 9:42 PM, Bruce Momjian <[email protected]>\n wrote: On Thu, Dec  5, 2013 at 09:22:14AM -0500, Tom Lane wrote:> salah jubeh <[email protected]> writes:> > When I excute a query,�the exection time is about 1 minute; however, when I execute the query with explain analyze the excution time jumps to 10 minutes. > > This isn't exactly unheard of, although it sounds like you have a> particularly bad case.  Cheap commodity PCs tend to have clock hardware> that takes multiple microseconds to read ... which was fine thirty years> ago when that hardware design was set, but with modern CPUs that's> painfully slow.> > Short of getting a better machine, you might look into whether you can run> a 64-bit instead of 32-bit operating system.  In some cases that allows> a clock reading\n to happen without a context switch to the kernel.> > > This is a little bit starnge for me; did any one experience somthing like this? Can I trust the generated plans?> > The numbers are fine as far as they go, but you should realize that the> relative cost of the cheaper plan nodes is being overstated, since the> added instrumentation cost is the same per node call regardless of how> much work happens within the node.The original poster might also want to run pg_test_timing to gethardware timing overhead:    http://www.postgresql.org/docs/9.3/static/pgtesttiming.html--   Bruce Momjian  <[email protected]>        http://momjian.us  EnterpriseDB                            http://enterprisedb.com  + Everyone has their own god. +-- Sent via pgsql-performance mailing list ([email protected])To make changes to\n your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 10 Dec 2013 13:53:54 -0800 (PST)", "msg_from": "salah jubeh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explain analyze time overhead" }, { "msg_contents": "On Tue, Dec 10, 2013 at 01:53:54PM -0800, salah jubeh wrote:\n> Thanks for the link, I find it very useful, unfortunatly I am using 9.1.11\n> version.\n> \n> After digging a little bit, I find out that the gettimeofday is indeed a little\n> bit slower on this particular machine than other machines, but it is not that\n> significanat difference. The query I am running is not optimized, and for some \n> reason the material operator is the one which causes most of the overhead. The\n> whole issue is due to cross colums statistics and highly correlated predicates,\n> the number of estimated records are much less than the actual number. Still, I\n> did not understand completly, why the material operator consume about 9 minutes\n> when I run explain analyze. i.e how many times we call gettimeofday for the\n> material operator -I need to calculate this-? 
Finally, for testing purposes, I\n> have disabled material and the query execution time dropped from 1 minute to\n> 12 second.\n\nThe executable is not tied to any particular Postgres version, so you\ncould get the 9.3 binary and just use that.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Dec 2013 16:59:44 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain analyze time overhead" } ]
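On 9.2 and later, one rough way to separate the instrumentation overhead from the real execution time is to compare a plain run, EXPLAIN (ANALYZE, TIMING OFF), and a full EXPLAIN ANALYZE of the same statement. The generate_series query below is only a stand-in for the actual query from this thread.

    \timing on

    -- Plain execution, no instrumentation:
    SELECT count(*) FROM generate_series(1, 1000000);

    -- Per-node row counts without per-node clock calls (TIMING requires 9.2+):
    EXPLAIN (ANALYZE, TIMING OFF)
    SELECT count(*) FROM generate_series(1, 1000000);

    -- Full per-node timing; the extra runtime relative to the two runs above
    -- is roughly the gettimeofday() overhead discussed in this thread:
    EXPLAIN (ANALYZE)
    SELECT count(*) FROM generate_series(1, 1000000);

As mentioned above, the pg_test_timing binary is not tied to a particular server version, so a 9.3 build can be used to measure clock overhead on the machine even though the server itself runs 9.1.
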
[ { "msg_contents": "I'm trying to increase the speed of inserts in a database that is on a not\nsuper fast storage system. I have installed a pair of SSDs and placed\npg_xlog on them but am still getting inserts that take up to a second to\ncomplete, with .3 seconds being about average. Iostat doesn't show the SSDs\nstressed at all, and changing synchronous_commit doesn't seem to affect it\none way or the other. Where would I look next for what could be causing the\ndelay?\n\nI'm trying to increase the speed of inserts in a database that is on a not super fast storage system. I have installed a pair of SSDs and placed pg_xlog on them but am still getting inserts that take up to a second to complete, with .3 seconds being about average. Iostat doesn't show the SSDs stressed at all, and changing synchronous_commit doesn't seem to affect it one way or the other. Where would I look next for what could be causing the delay?", "msg_date": "Thu, 5 Dec 2013 09:01:54 -0600", "msg_from": "Skarsol <[email protected]>", "msg_from_op": true, "msg_subject": "WAL + SSD = slow inserts?" }, { "msg_contents": "Hello,\n could you please post the postgresql version, the\npostgresql.conf, the operative system used, the kernel version and the\nfilesystem used ?\n\nThank you\n\n\n2013/12/5 Skarsol <[email protected]>\n\n> I'm trying to increase the speed of inserts in a database that is on a not\n> super fast storage system. I have installed a pair of SSDs and placed\n> pg_xlog on them but am still getting inserts that take up to a second to\n> complete, with .3 seconds being about average. Iostat doesn't show the SSDs\n> stressed at all, and changing synchronous_commit doesn't seem to affect it\n> one way or the other. Where would I look next for what could be causing the\n> delay?\n>\n\nHello,             could you please post the postgresql version, the postgresql.conf, the operative system used, the kernel version and the filesystem used ?Thank you\n2013/12/5 Skarsol <[email protected]>\nI'm trying to increase the speed of inserts in a database that is on a not super fast storage system. I have installed a pair of SSDs and placed pg_xlog on them but am still getting inserts that take up to a second to complete, with .3 seconds being about average. Iostat doesn't show the SSDs stressed at all, and changing synchronous_commit doesn't seem to affect it one way or the other. Where would I look next for what could be causing the delay?", "msg_date": "Thu, 5 Dec 2013 16:06:35 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "psql (PostgreSQL) 9.2.5\nRed Hat Enterprise Linux Server release 6.4 (Santiago)\nLinux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64\nx86_64 x86_64 GNU/Linux\nAll relevant filesystems are ext4\n\nChanges from defaults:\nmax_connections = 500\nshared_buffers = 32000MB\ntemp_buffers = 24MB\nwork_mem = 1GB\nmaintenance_work_mem = 5GB\nwal_level = archive\nwal_buffers = 16MB\ncheckpoint_completion_target = 0.9\narchive_mode = on\narchive_command = 'test ! 
-f /databases/pg_archive/db/%f && cp %p\n/databases/pg_archive/db/%f'\neffective_cache_size = 64000MB\ndefault_statistics_target = 5000\nlog_checkpoints = on\nstats_temp_directory = '/tmp/pgstat'\n\n\n\nOn Thu, Dec 5, 2013 at 9:06 AM, desmodemone <[email protected]> wrote:\n\n> Hello,\n> could you please post the postgresql version, the\n> postgresql.conf, the operative system used, the kernel version and the\n> filesystem used ?\n>\n> Thank you\n>\n>\n> 2013/12/5 Skarsol <[email protected]>\n>\n>> I'm trying to increase the speed of inserts in a database that is on a\n>> not super fast storage system. I have installed a pair of SSDs and placed\n>> pg_xlog on them but am still getting inserts that take up to a second to\n>> complete, with .3 seconds being about average. Iostat doesn't show the SSDs\n>> stressed at all, and changing synchronous_commit doesn't seem to affect it\n>> one way or the other. Where would I look next for what could be causing the\n>> delay?\n>>\n>\n>\n\npsql (PostgreSQL) 9.2.5Red Hat Enterprise Linux Server release 6.4 (Santiago)Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64 x86_64 x86_64 GNU/LinuxAll relevant filesystems are ext4\nChanges from defaults:max_connections = 500shared_buffers = 32000MBtemp_buffers = 24MB work_mem = 1GBmaintenance_work_mem = 5GB wal_level = archivewal_buffers = 16MB checkpoint_completion_target = 0.9\n\narchive_mode = onarchive_command = 'test ! -f /databases/pg_archive/db/%f && cp %p /databases/pg_archive/db/%f'effective_cache_size = 64000MBdefault_statistics_target = 5000log_checkpoints = on\n\nstats_temp_directory = '/tmp/pgstat'On Thu, Dec 5, 2013 at 9:06 AM, desmodemone <[email protected]> wrote:\nHello,             could you please post the postgresql version, the postgresql.conf, the operative system used, the kernel version and the filesystem used ?\nThank you\n2013/12/5 Skarsol <[email protected]>\nI'm trying to increase the speed of inserts in a database that is on a not super fast storage system. I have installed a pair of SSDs and placed pg_xlog on them but am still getting inserts that take up to a second to complete, with .3 seconds being about average. Iostat doesn't show the SSDs stressed at all, and changing synchronous_commit doesn't seem to affect it one way or the other. Where would I look next for what could be causing the delay?", "msg_date": "Thu, 5 Dec 2013 09:16:35 -0600", "msg_from": "Skarsol <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "On Thu, Dec 5, 2013 at 8:16 AM, Skarsol <[email protected]> wrote:\n> psql (PostgreSQL) 9.2.5\n> Red Hat Enterprise Linux Server release 6.4 (Santiago)\n> Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64\n> x86_64 x86_64 GNU/Linux\n> All relevant filesystems are ext4\n>\n> Changes from defaults:\n> max_connections = 500\n> shared_buffers = 32000MB\n> temp_buffers = 24MB\n> work_mem = 1GB\n> maintenance_work_mem = 5GB\n> wal_level = archive\n> wal_buffers = 16MB\n> checkpoint_completion_target = 0.9\n> archive_mode = on\n> archive_command = 'test ! -f /databases/pg_archive/db/%f && cp %p\n> /databases/pg_archive/db/%f'\n> effective_cache_size = 64000MB\n> default_statistics_target = 5000\n> log_checkpoints = on\n> stats_temp_directory = '/tmp/pgstat'\n\nOK I'd make the following changes.\n1: Drop shared_buffers to something like 1000MB\n2: drop work_mem to 16MB or so. 
1GB is pathological, as it can make\nthe machine run out of memory quite fast.\n3: drop max_connections to 100 or so. if you really need 500 conns,\nthen work_mem of 1G is that much worse.\n\nNext, move pg_xlog OFF the SSDs and back onto spinning media and put\nyour data/base dir on the SSDs.\n\nSSDs aren't much faster, if at all, for pg_xlog, but are much much\nfaster for data/base files.\n\nAlso changing the io schduler for the SSDs to noop:\n\nhttp://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/switching-sched.txt?id=HEAD\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 08:50:25 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "On Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Thu, Dec 5, 2013 at 8:16 AM, Skarsol <[email protected]> wrote:\n> > psql (PostgreSQL) 9.2.5\n> > Red Hat Enterprise Linux Server release 6.4 (Santiago)\n> > Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013\n> x86_64\n> > x86_64 x86_64 GNU/Linux\n> > All relevant filesystems are ext4\n> >\n> > Changes from defaults:\n> > max_connections = 500\n> > shared_buffers = 32000MB\n> > temp_buffers = 24MB\n> > work_mem = 1GB\n> > maintenance_work_mem = 5GB\n> > wal_level = archive\n> > wal_buffers = 16MB\n> > checkpoint_completion_target = 0.9\n> > archive_mode = on\n> > archive_command = 'test ! -f /databases/pg_archive/db/%f && cp %p\n> > /databases/pg_archive/db/%f'\n> > effective_cache_size = 64000MB\n> > default_statistics_target = 5000\n> > log_checkpoints = on\n> > stats_temp_directory = '/tmp/pgstat'\n>\n> OK I'd make the following changes.\n> 1: Drop shared_buffers to something like 1000MB\n> 2: drop work_mem to 16MB or so. 1GB is pathological, as it can make\n> the machine run out of memory quite fast.\n> 3: drop max_connections to 100 or so. if you really need 500 conns,\n> then work_mem of 1G is that much worse.\n>\n> Next, move pg_xlog OFF the SSDs and back onto spinning media and put\n> your data/base dir on the SSDs.\n>\n> SSDs aren't much faster, if at all, for pg_xlog, but are much much\n> faster for data/base files.\n>\n> Also changing the io schduler for the SSDs to noop:\n>\n>\n> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/switching-sched.txt?id=HEAD\n>\n\nChanging the scheduler to noop seems to have had a decent effect. I've made\nthe other recommended changes other than the connections as we do need that\nmany currently. We're looking to implement pg_bouncer which should help\nwith that.\n\nMoving the whole database to SSD isn't an option currently due to size.\n\nThe slowest inserts are happening on tables that are partitioned by\ncreation time. As part of the process there is a rule to select curval from\na sequence but there are no other selects or anything in the trigger\nprocedure. Could the sequence be slowing it down? 
I dont see a way to\nchange the tablespace of one.\n\nOn Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe <[email protected]> wrote:\nOn Thu, Dec 5, 2013 at 8:16 AM, Skarsol <[email protected]> wrote:\n\n> psql (PostgreSQL) 9.2.5\n> Red Hat Enterprise Linux Server release 6.4 (Santiago)\n> Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64\n> x86_64 x86_64 GNU/Linux\n> All relevant filesystems are ext4\n>\n> Changes from defaults:\n> max_connections = 500\n> shared_buffers = 32000MB\n> temp_buffers = 24MB\n> work_mem = 1GB\n> maintenance_work_mem = 5GB\n> wal_level = archive\n> wal_buffers = 16MB\n> checkpoint_completion_target = 0.9\n> archive_mode = on\n> archive_command = 'test ! -f /databases/pg_archive/db/%f && cp %p\n> /databases/pg_archive/db/%f'\n> effective_cache_size = 64000MB\n> default_statistics_target = 5000\n> log_checkpoints = on\n> stats_temp_directory = '/tmp/pgstat'\n\nOK I'd make the following changes.\n1: Drop shared_buffers to something like 1000MB\n2: drop work_mem to 16MB or so. 1GB is pathological, as it can make\nthe machine run out of memory quite fast.\n3: drop max_connections to 100 or so. if you really need 500 conns,\nthen work_mem of 1G is that much worse.\n\nNext, move pg_xlog OFF the SSDs and back onto spinning media and put\nyour data/base dir on the SSDs.\n\nSSDs aren't much faster, if at all, for pg_xlog, but are much much\nfaster for data/base files.\n\nAlso changing the io schduler for the SSDs to noop:\n\nhttp://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/switching-sched.txt?id=HEAD\nChanging the scheduler to noop seems to have had a decent effect. I've made the other recommended changes other than the connections as we do need that many currently. We're looking to implement pg_bouncer which should help with that.\nMoving the whole database to SSD isn't an option currently due to size.The slowest inserts are happening on tables that are partitioned by creation time. As part of the process there is a rule to select curval from a sequence but there are no other selects or anything in  the trigger procedure. Could the sequence be slowing it down? I dont see a way to change the tablespace of one.", "msg_date": "Thu, 5 Dec 2013 10:13:49 -0600", "msg_from": "Skarsol <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "On Thu, Dec 5, 2013 at 7:01 AM, Skarsol <[email protected]> wrote:\n\n> I'm trying to increase the speed of inserts in a database that is on a not\n> super fast storage system. I have installed a pair of SSDs and placed\n> pg_xlog on them but am still getting inserts that take up to a second to\n> complete, with .3 seconds being about average. Iostat doesn't show the SSDs\n> stressed at all, and changing synchronous_commit doesn't seem to affect it\n> one way or the other. Where would I look next for what could be causing the\n> delay?\n>\n\nWhat are you inserting? At 0.3 seconds per, I'm guessing this is not just\na simple single-row insert statement.\n\nAre you IO bound or CPU bound?\n\nOn Thu, Dec 5, 2013 at 7:01 AM, Skarsol <[email protected]> wrote:\nI'm trying to increase the speed of inserts in a database that is on a not super fast storage system. I have installed a pair of SSDs and placed pg_xlog on them but am still getting inserts that take up to a second to complete, with .3 seconds being about average. 
Iostat doesn't show the SSDs stressed at all, and changing synchronous_commit doesn't seem to affect it one way or the other. Where would I look next for what could be causing the delay?\nWhat are you inserting?  At 0.3 seconds per, I'm guessing this is not just a simple single-row insert statement. Are you IO bound or CPU bound?", "msg_date": "Thu, 5 Dec 2013 09:56:56 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "On Thu, Dec 5, 2013 at 9:13 AM, Skarsol <[email protected]> wrote:\n> On Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Thu, Dec 5, 2013 at 8:16 AM, Skarsol <[email protected]> wrote:\n>> > psql (PostgreSQL) 9.2.5\n>> > Red Hat Enterprise Linux Server release 6.4 (Santiago)\n>> > Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013\n>> > x86_64\n>> > x86_64 x86_64 GNU/Linux\n>> > All relevant filesystems are ext4\n>> >\n>> > Changes from defaults:\n>> > max_connections = 500\n>> > shared_buffers = 32000MB\n>> > temp_buffers = 24MB\n>> > work_mem = 1GB\n>> > maintenance_work_mem = 5GB\n>> > wal_level = archive\n>> > wal_buffers = 16MB\n>> > checkpoint_completion_target = 0.9\n>> > archive_mode = on\n>> > archive_command = 'test ! -f /databases/pg_archive/db/%f && cp %p\n>> > /databases/pg_archive/db/%f'\n>> > effective_cache_size = 64000MB\n>> > default_statistics_target = 5000\n>> > log_checkpoints = on\n>> > stats_temp_directory = '/tmp/pgstat'\n>>\n>> OK I'd make the following changes.\n>> 1: Drop shared_buffers to something like 1000MB\n>> 2: drop work_mem to 16MB or so. 1GB is pathological, as it can make\n>> the machine run out of memory quite fast.\n>> 3: drop max_connections to 100 or so. if you really need 500 conns,\n>> then work_mem of 1G is that much worse.\n>>\n>> Next, move pg_xlog OFF the SSDs and back onto spinning media and put\n>> your data/base dir on the SSDs.\n>>\n>> SSDs aren't much faster, if at all, for pg_xlog, but are much much\n>> faster for data/base files.\n>>\n>> Also changing the io schduler for the SSDs to noop:\n>>\n>>\n>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/switching-sched.txt?id=HEAD\n>\n>\n> Changing the scheduler to noop seems to have had a decent effect. I've made\n> the other recommended changes other than the connections as we do need that\n> many currently. We're looking to implement pg_bouncer which should help with\n> that.\n>\n> Moving the whole database to SSD isn't an option currently due to size.\n>\n> The slowest inserts are happening on tables that are partitioned by creation\n> time. As part of the process there is a rule to select curval from a\n> sequence but there are no other selects or anything in the trigger\n> procedure. Could the sequence be slowing it down? I dont see a way to change\n> the tablespace of one.\n\nRules have a lot of overhead. Is there a reason you're not using\ndefaults or triggers?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 5 Dec 2013 11:08:14 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "On Thu, Dec 5, 2013 at 10:08 AM, Scott Marlowe <[email protected]>wrote:\n\n> Rules have a lot of overhead. 
Is there a reason you're not using\n> defaults or triggers?\n>\n\nOr for even less overhead, load the partitions directly, and preferably use\n\"DEFAULT nextval('some_sequence')\" as Scott mentioned.\n\nOn Thu, Dec 5, 2013 at 10:08 AM, Scott Marlowe <[email protected]> wrote:\n\nRules have a lot of overhead. Is there a reason you're not using\ndefaults or triggers?Or for even less overhead, load the partitions directly, and preferably use \"DEFAULT nextval('some_sequence')\" as Scott mentioned.", "msg_date": "Thu, 5 Dec 2013 11:19:26 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "On 06/12/13 05:13, Skarsol wrote:\n> On Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe <[email protected]>wrote:\n>\n>> On Thu, Dec 5, 2013 at 8:16 AM, Skarsol <[email protected]> wrote:\n>>> psql (PostgreSQL) 9.2.5\n>>> Red Hat Enterprise Linux Server release 6.4 (Santiago)\n>>> Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013\n>> x86_64\n>>> x86_64 x86_64 GNU/Linux\n>>> All relevant filesystems are ext4\n>>>\n>>> Changes from defaults:\n>>> max_connections = 500\n>>> shared_buffers = 32000MB\n>>> temp_buffers = 24MB\n>>> work_mem = 1GB\n>>> maintenance_work_mem = 5GB\n>>> wal_level = archive\n>>> wal_buffers = 16MB\n>>> checkpoint_completion_target = 0.9\n>>> archive_mode = on\n>>> archive_command = 'test ! -f /databases/pg_archive/db/%f && cp %p\n>>> /databases/pg_archive/db/%f'\n>>> effective_cache_size = 64000MB\n>>> default_statistics_target = 5000\n>>> log_checkpoints = on\n>>> stats_temp_directory = '/tmp/pgstat'\n>> OK I'd make the following changes.\n>> 1: Drop shared_buffers to something like 1000MB\n>> 2: drop work_mem to 16MB or so. 1GB is pathological, as it can make\n>> the machine run out of memory quite fast.\n>> 3: drop max_connections to 100 or so. if you really need 500 conns,\n>> then work_mem of 1G is that much worse.\n>>\n>> Next, move pg_xlog OFF the SSDs and back onto spinning media and put\n>> your data/base dir on the SSDs.\n>>\n>> SSDs aren't much faster, if at all, for pg_xlog, but are much much\n>> faster for data/base files.\n>>\n>> Also changing the io schduler for the SSDs to noop:\n>>\n>>\n>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/switching-sched.txt?id=HEAD\n>>\n> Changing the scheduler to noop seems to have had a decent effect. I've made\n> the other recommended changes other than the connections as we do need that\n> many currently. We're looking to implement pg_bouncer which should help\n> with that.\n>\n\n\nWhat model SSD are you using? Some can work better with deadline than \nnoop (as their own scheduling firmware may be pretty poor). Also, check \nif there are updates for the SSD firmware. I have a couple of Crucial \nM4s that changed from being fairly average to very fast indeed after \ngetting later firmware...\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 06 Dec 2013 11:01:00 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "On Thu, Dec 5, 2013 at 1:19 PM, bricklen <[email protected]> wrote:\n\n>\n> On Thu, Dec 5, 2013 at 10:08 AM, Scott Marlowe <[email protected]>wrote:\n>\n>> Rules have a lot of overhead. 
Is there a reason you're not using\n>> defaults or triggers?\n>>\n>\n> Or for even less overhead, load the partitions directly, and preferably\n> use \"DEFAULT nextval('some_sequence')\" as Scott mentioned.\n>\n>\nThe rule is being used to return the id of the insert, it's not part of the\npartitioning itself. The id is generated with default nextval. I've looked\nat using returning instead but that will require a large refactoring of the\ncodebase and seems to have issues when combined with the partitioning. The\npartitioning is done with a BEFORE INSERT ON trigger. The trigger proc\ndoesn't do any selects, it's just based on the contents of the insert\nitself.\n\nOn Thu, Dec 5, 2013 at 1:19 PM, bricklen <[email protected]> wrote:\nOn Thu, Dec 5, 2013 at 10:08 AM, Scott Marlowe <[email protected]> wrote:\n\nRules have a lot of overhead. Is there a reason you're not using\ndefaults or triggers?Or for even less overhead, load the partitions directly, and preferably use \"DEFAULT nextval('some_sequence')\" as Scott mentioned.\n\n\nThe rule is being used to return the id of the insert, it's not part of the partitioning itself. The id is generated with default nextval. I've looked at using returning instead but that will require a large refactoring of the codebase and seems to have issues when combined with the partitioning. The partitioning is done with a BEFORE INSERT ON trigger. The trigger proc doesn't do any selects, it's just based on the contents of the insert itself.", "msg_date": "Thu, 5 Dec 2013 23:55:27 -0600", "msg_from": "Skarsol <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "On Thu, Dec 5, 2013 at 9:55 PM, Skarsol <[email protected]> wrote:\n\n> The rule is being used to return the id of the insert...\n>\n\nTake a look at the RETURNING clause of the INSERT statement. That should\nmeet your needs here without having to bother with rules.\n\nrls\n\n-- \n:wq\n\nOn Thu, Dec 5, 2013 at 9:55 PM, Skarsol <[email protected]> wrote:\nThe rule is being used to return the id of the insert...Take a look at the RETURNING clause of the INSERT statement. That should meet your needs here without having to bother with rules.\nrls-- :wq", "msg_date": "Thu, 5 Dec 2013 22:03:44 -0800", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL + SSD = slow inserts?" }, { "msg_contents": "On 5.12.2013 17:13, Skarsol wrote:\n> On Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> On Thu, Dec 5, 2013 at 8:16 AM, Skarsol <[email protected]\n> <mailto:[email protected]>> wrote:\n> > psql (PostgreSQL) 9.2.5\n> > Red Hat Enterprise Linux Server release 6.4 (Santiago)\n> > Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT\n> 2013 x86_64\n> > x86_64 x86_64 GNU/Linux\n> > All relevant filesystems are ext4\n> >\n> > Changes from defaults:\n> > max_connections = 500\n> > shared_buffers = 32000MB\n> > temp_buffers = 24MB\n> > work_mem = 1GB\n> > maintenance_work_mem = 5GB\n> > wal_level = archive\n> > wal_buffers = 16MB\n> > checkpoint_completion_target = 0.9\n> > archive_mode = on\n> > archive_command = 'test ! 
-f /databases/pg_archive/db/%f && cp %p\n> > /databases/pg_archive/db/%f'\n> > effective_cache_size = 64000MB\n> > default_statistics_target = 5000\n> > log_checkpoints = on\n> > stats_temp_directory = '/tmp/pgstat'\n> \n> OK I'd make the following changes.\n> 1: Drop shared_buffers to something like 1000MB\n> 2: drop work_mem to 16MB or so. 1GB is pathological, as it can make\n> the machine run out of memory quite fast.\n> 3: drop max_connections to 100 or so. if you really need 500 conns,\n> then work_mem of 1G is that much worse.\n> \n> Next, move pg_xlog OFF the SSDs and back onto spinning media and put\n> your data/base dir on the SSDs.\n> \n> SSDs aren't much faster, if at all, for pg_xlog, but are much much\n> faster for data/base files.\n> \n> Also changing the io schduler for the SSDs to noop:\n> \n> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/switching-sched.txt?id=HEAD\n> \n> \n> Changing the scheduler to noop seems to have had a decent effect. I've\n> made the other recommended changes other than the connections as we do\n> need that many currently. We're looking to implement pg_bouncer which\n> should help with that.\n\nI'm wondering if you left checkpoint_segments at the default? Try\nbumping it up to 32 or more, otherwise it might cause frequent\ncheckpoints. I see you have log_checkpoints=on, so do you see any\ncheckpoint messages in the logs?\n\nAlso, how much data are you actually inserting? Are you inserting a\nsingle row, or large number of them? What is the structure of the table?\nAre there any foreign keys in it?\n\nIf you do a batch of such inserts (so that it takes a minute or so in\ntotal), what do you see in top/iostat? Is the system CPU or IO bound?\nShow us a dozen lines of\n\n iostat -x -k 1\n vmstat 1\n\n> Moving the whole database to SSD isn't an option currently due to size.\n\nMoving the WAL to SSDs is rather wasteful, in my experience. A RAID\ncontroller with decent write cache (256MB or more) and BBU is both\nfaster and cheaper.\n\nAlso, there are huge differences between various SSDs vendors and\nmodels, or even between the same SSD model with different firmware\nversions. What SSD model are you using? Have you updated the firmware?\n\n> The slowest inserts are happening on tables that are partitioned by\n> creation time. As part of the process there is a rule to select curval\n> from a sequence but there are no other selects or anything in the\n> trigger procedure. Could the sequence be slowing it down? I dont see a\n> way to change the tablespace of one.\n\nThere are many possible causes for this - for example you're not telling\nus something about the table structure (e.g. FK constraints might be\ncausing this) or about the hardware.\n\nHave you done some tests on the SSD to verify it works properly? I've\nseen broken firmwares behaving like this (unexpectedly high latencies in\nrandom intervals etc.).\n\nTomas\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 07 Dec 2013 00:59:26 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL + SSD = slow inserts?" } ]
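A minimal sketch of the INSERT ... RETURNING approach suggested in the thread above, in place of a rule that re-reads currval() after the insert. All object names below are invented for illustration; the original poster's schema was not shown. Note that when rows are routed by a BEFORE INSERT trigger on the parent that returns NULL, RETURNING on the parent produces no row, which is presumably the "issues when combined with the partitioning" mentioned, so the RETURNING form pairs naturally with inserting into the child partition directly:

-- Hypothetical layout (the real table definitions were not posted):
CREATE TABLE events (
    id         bigserial PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    bytea
);

CREATE TABLE events_2013_12 (
    CHECK (created_at >= '2013-12-01' AND created_at < '2014-01-01')
) INHERITS (events);

-- The generated id comes straight back from the INSERT; no rule and no
-- separate currval() call are needed, and the parent trigger is bypassed:
INSERT INTO events_2013_12 (created_at, payload)
VALUES (now(), NULL)
RETURNING id;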
[ { "msg_contents": "I'm managing a database that is adding about 10-20M records per day to a\ntable and time is a core part of most queries, so I've been looking into\nseeing if I need to start using partitioning based on the time column and\nI've found these general guidelines:\n\nDon't use more than about 50 paritions (\nhttp://www.postgresql.org/message-id/[email protected] )\nUse triggers to make the interface easier (\nhttps://wiki.postgresql.org/wiki/Table_partitioning#Trigger-based and\nhttp://stackoverflow.com/questions/16049396/postgres-partition-by-week )\n\nThe only data I found fell inline with what you'd expect (i.e. speeds up\nselects but slows down inserts/updates\nhttp://www.if-not-true-then-false.com/2009/performance-testing-between-partitioned-and-non-partitioned-postgresql-tables-part-3/)\n\nSo I was thinking that partitioning based on month to keep the number of\npartitions low, so that would mean about 0.5G records in each table. Does\nthat seem like a reasonable number of records in each partition? Is there\nanything else that I should consider or be aware of?\n\nThanks,\nDave\n\nI'm managing a database that is adding about 10-20M records per day to a table and time is a core part of most queries, so I've been looking into seeing if I need to start using partitioning based on the time column and I've found these general guidelines:\nDon't use more than about 50 paritions ( http://www.postgresql.org/message-id/[email protected] )\nUse triggers to make the interface easier ( https://wiki.postgresql.org/wiki/Table_partitioning#Trigger-based and http://stackoverflow.com/questions/16049396/postgres-partition-by-week )\nThe only data I found fell inline with what you'd expect (i.e. speeds up selects but slows down inserts/updates http://www.if-not-true-then-false.com/2009/performance-testing-between-partitioned-and-non-partitioned-postgresql-tables-part-3/ )\nSo I was thinking that partitioning based on month to keep the number of partitions low, so that would mean about 0.5G records in each table. Does that seem like a reasonable number of records in each partition? Is there anything else that I should consider or be aware of?\nThanks,Dave", "msg_date": "Thu, 5 Dec 2013 08:36:21 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Recommendations for partitioning?" }, { "msg_contents": "Hi Dave,\n About the number of partitions , I didn't have so much\nproblems with hundreds of partitions ( like 360 days in a year ).\nMoreover you could bypass the overhead of trigger with a direct insert on\nthe partition, also to have a parallel insert without to firing too much\nthe trigger. Remember to enable the check constraints..\nIn my opinion it's better you try to have less rows/partition. How much is\nthe average row length in byte ? 
If you will have to rebuild indexes , it\nwill be possible , if the partition it's too big, that the\nmaintenance_work_mem will be not enough and you will sort on disk.\nI think you have to evaluate also to divide the partitions on different\ntablespaces so to spread the i/o on different storage types/number ( and so\non ) and to manage with different strategy the indexes (it's possible the\nsearches will be different on \"historical\" partitions and on \"live\"\npartitions).\nAnother strategy it's also, not only to create partitions, but to shard\ndata between more nodes.\n\n\nBye\n\nMat\n\n\n2013/12/5 Dave Johansen <[email protected]>\n\n> I'm managing a database that is adding about 10-20M records per day to a\n> table and time is a core part of most queries, so I've been looking into\n> seeing if I need to start using partitioning based on the time column and\n> I've found these general guidelines:\n>\n> Don't use more than about 50 paritions (\n> http://www.postgresql.org/message-id/[email protected] )\n> Use triggers to make the interface easier (\n> https://wiki.postgresql.org/wiki/Table_partitioning#Trigger-based and\n> http://stackoverflow.com/questions/16049396/postgres-partition-by-week )\n>\n> The only data I found fell inline with what you'd expect (i.e. speeds up\n> selects but slows down inserts/updates\n> http://www.if-not-true-then-false.com/2009/performance-testing-between-partitioned-and-non-partitioned-postgresql-tables-part-3/)\n>\n> So I was thinking that partitioning based on month to keep the number of\n> partitions low, so that would mean about 0.5G records in each table. Does\n> that seem like a reasonable number of records in each partition? Is there\n> anything else that I should consider or be aware of?\n>\n> Thanks,\n> Dave\n>\n\nHi Dave,              About the number of partitions , I didn't have so much problems with hundreds of partitions ( like 360 days in a year ). Moreover you could bypass the overhead of trigger with a direct insert on the partition, also to have a parallel insert without to firing too much the trigger. Remember to enable the check constraints..\nIn my opinion it's better you try to have less rows/partition. How much is the average row length in byte ? If you will have to rebuild indexes , it will be possible , if the partition it's too big, that the maintenance_work_mem will be not enough and you will sort on disk.\nI think you have to evaluate also to divide the partitions on  different tablespaces so to spread the i/o on different storage types/number ( and so on ) and to manage with different strategy the indexes (it's possible the searches will be different on \"historical\" partitions and on \"live\" partitions).\nAnother strategy it's also, not only to create partitions, but to shard data between more nodes.ByeMat\n2013/12/5 Dave Johansen <[email protected]>\nI'm managing a database that is adding about 10-20M records per day to a table and time is a core part of most queries, so I've been looking into seeing if I need to start using partitioning based on the time column and I've found these general guidelines:\nDon't use more than about 50 paritions ( http://www.postgresql.org/message-id/[email protected] )\n\nUse triggers to make the interface easier ( https://wiki.postgresql.org/wiki/Table_partitioning#Trigger-based and http://stackoverflow.com/questions/16049396/postgres-partition-by-week )\nThe only data I found fell inline with what you'd expect (i.e. 
speeds up selects but slows down inserts/updates http://www.if-not-true-then-false.com/2009/performance-testing-between-partitioned-and-non-partitioned-postgresql-tables-part-3/ )\nSo I was thinking that partitioning based on month to keep the number of partitions low, so that would mean about 0.5G records in each table. Does that seem like a reasonable number of records in each partition? Is there anything else that I should consider or be aware of?\nThanks,Dave", "msg_date": "Sat, 7 Dec 2013 18:09:19 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "On Thu, Dec 5, 2013 at 7:36 AM, Dave Johansen <[email protected]>wrote:\n\n> I'm managing a database that is adding about 10-20M records per day to a\n> table and time is a core part of most queries,\n>\n\n\nWhat is the nature of how the time column is used in the queries?\nDepending on how it is used, you might not get much improvement at all, or\nyou might get N fold improvement, or you might find that re-designing your\nindexes could get you the same query improvement that partitioning would,\nbut with less headache.\n\n\n> so I've been looking into seeing if I need to start using partitioning\n> based on the time column and I've found these general guidelines:\n>\n> Don't use more than about 50 paritions (\n> http://www.postgresql.org/message-id/[email protected] )\n> Use triggers to make the interface easier (\n> https://wiki.postgresql.org/wiki/Table_partitioning#Trigger-based and\n> http://stackoverflow.com/questions/16049396/postgres-partition-by-week )\n>\n\nUsing triggers slows INSERTs down by a lot (unless they were already slow\ndue to the need to hit disk to maintain the indexes or something like\nthat). Are you sure you can handle that slow down, given your insertion\nrate? You could get the best of both worlds by having your bulk loaders\ntarget the correct partition directly, but also have the triggers on the\nparent table for any programs that don't get the message.\n\n\n>\n> The only data I found fell inline with what you'd expect (i.e. speeds up\n> selects but slows down inserts/updates\n> http://www.if-not-true-then-false.com/2009/performance-testing-between-partitioned-and-non-partitioned-postgresql-tables-part-3/)\n>\n\n\nOne of the big benefits of partitioning can be to speed up insertions\ntremendously, by keeping the hot part of the indices that need to be\nmaintained upon insertion together in shared_buffers.\n\n\n>\n> So I was thinking that partitioning based on month to keep the number of\n> partitions low, so that would mean about 0.5G records in each table. Does\n> that seem like a reasonable number of records in each partition? Is there\n> anything else that I should consider or be aware of?\n>\n\nHow will data be expired? Hows does the size of one of your intended\npartitions compare to your RAM and shared_buffers.\n\nCheers,\n\nJeff\n\nOn Thu, Dec 5, 2013 at 7:36 AM, Dave Johansen <[email protected]> wrote:\nI'm managing a database that is adding about 10-20M records per day to a table and time is a core part of most queries, \nWhat is the nature of how the time column is used in the queries?   
Depending on how it is used, you might not get much improvement at all, or you might get N fold improvement, or you might find that re-designing your indexes could get you the same query improvement that partitioning would, but with less headache.\n so I've been looking into seeing if I need to start using partitioning based on the time column and I've found these general guidelines:\nDon't use more than about 50 paritions ( http://www.postgresql.org/message-id/[email protected] )\n\nUse triggers to make the interface easier ( https://wiki.postgresql.org/wiki/Table_partitioning#Trigger-based and http://stackoverflow.com/questions/16049396/postgres-partition-by-week )\nUsing triggers slows INSERTs down by a lot (unless they were already slow due to the need to hit disk to maintain the indexes or something like that).  Are you sure you can handle that slow down, given your insertion rate?  You could get the best of both worlds by having your bulk loaders target the correct partition directly, but also have the triggers on the parent table for any programs that don't get the message.\n \nThe only data I found fell inline with what you'd expect (i.e. speeds up selects but slows down inserts/updates http://www.if-not-true-then-false.com/2009/performance-testing-between-partitioned-and-non-partitioned-postgresql-tables-part-3/ )\nOne of the big benefits of partitioning can be to speed up insertions tremendously, by keeping the hot part of the indices that need to be maintained upon insertion together in shared_buffers.\n \nSo I was thinking that partitioning based on month to keep the number of partitions low, so that would mean about 0.5G records in each table. Does that seem like a reasonable number of records in each partition? Is there anything else that I should consider or be aware of?\nHow will data be expired?  Hows does the size of one of your intended partitions compare to your RAM and shared_buffers. Cheers,\nJeff", "msg_date": "Sat, 7 Dec 2013 12:29:09 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "On Sat, Dec 7, 2013 at 10:09 AM, desmodemone <[email protected]> wrote:\n> Hi Dave,\n> About the number of partitions , I didn't have so much\n> problems with hundreds of partitions ( like 360 days in a year ).\n> Moreover you could bypass the overhead of trigger with a direct insert on\n> the partition, also to have a parallel insert without to firing too much the\n> trigger. Remember to enable the check constraints..\n> In my opinion it's better you try to have less rows/partition. How much is\n> the average row length in byte ? If you will have to rebuild indexes , it\n> will be possible , if the partition it's too big, that the\n> maintenance_work_mem will be not enough and you will sort on disk.\n> I think you have to evaluate also to divide the partitions on different\n> tablespaces so to spread the i/o on different storage types/number ( and so\n> on ) and to manage with different strategy the indexes (it's possible the\n> searches will be different on \"historical\" partitions and on \"live\"\n> partitions).\n> Another strategy it's also, not only to create partitions, but to shard data\n> between more nodes.\n\nI agree on the number of partitions. I've run a stats db with daily\npartitions with about 2 years data in it with no real problems due to\nhigh numbers of partitions. 
Somewhere around 1,000 things start to get\nslower.\n\nI'll add that you can use assymetric partitioning if you tend to do a\nlot of more fine grained queries on recent data and more big roll up\non older ones. I.e. partition by month except for the last 30 days, do\nit by day etc. Then at the end of the month roll all the days into a\nmonth partition and delete them.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 7 Dec 2013 13:37:12 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "Sorry for the delay response. We had some hardware/configuration issues\nthat appear to be solved now, so now we're starting to actually play with\nmodifying the database.\n\nOn Sat, Dec 7, 2013 at 1:29 PM, Jeff Janes <[email protected]> wrote:\n\n> On Thu, Dec 5, 2013 at 7:36 AM, Dave Johansen <[email protected]>wrote:\n>\n>> I'm managing a database that is adding about 10-20M records per day to a\n>> table and time is a core part of most queries,\n>>\n>\n>\n> What is the nature of how the time column is used in the queries?\n> Depending on how it is used, you might not get much improvement at all, or\n> you might get N fold improvement, or you might find that re-designing your\n> indexes could get you the same query improvement that partitioning would,\n> but with less headache.\n>\n\nThe time column is usually used to calculate statistics, find/analyze\nduplicates, analyze data contents, etc on a specific time window. So there\nwill be queries with GROUP BY and WINDOWs with a specific time filter in\nthe where clause.\n\n\n>\n> so I've been looking into seeing if I need to start using partitioning\n>> based on the time column and I've found these general guidelines:\n>>\n>> Don't use more than about 50 paritions (\n>> http://www.postgresql.org/message-id/[email protected] )\n>> Use triggers to make the interface easier (\n>> https://wiki.postgresql.org/wiki/Table_partitioning#Trigger-based and\n>> http://stackoverflow.com/questions/16049396/postgres-partition-by-week )\n>>\n>\n> Using triggers slows INSERTs down by a lot (unless they were already slow\n> due to the need to hit disk to maintain the indexes or something like\n> that). Are you sure you can handle that slow down, given your insertion\n> rate? You could get the best of both worlds by having your bulk loaders\n> target the correct partition directly, but also have the triggers on the\n> parent table for any programs that don't get the message.\n>\n\nInserting directly into the correct partition whenever possible and leaving\nthe trigger on the parent table seems like the best option.\n\n\n>\n>> The only data I found fell inline with what you'd expect (i.e. 
speeds up\n>> selects but slows down inserts/updates\n>> http://www.if-not-true-then-false.com/2009/performance-testing-between-partitioned-and-non-partitioned-postgresql-tables-part-3/)\n>>\n>\n>\n> One of the big benefits of partitioning can be to speed up insertions\n> tremendously, by keeping the hot part of the indices that need to be\n> maintained upon insertion together in shared_buffers.\n>\n\nWe insert lots of new data, but rarely modify existing data once it's in\nthe database, so it sounds like this would be a big benefit for us.\n\n\n>\n>\n>> So I was thinking that partitioning based on month to keep the number of\n>> partitions low, so that would mean about 0.5G records in each table. Does\n>> that seem like a reasonable number of records in each partition? Is there\n>> anything else that I should consider or be aware of?\n>>\n>\n> How will data be expired? Hows does the size of one of your intended\n> partitions compare to your RAM and shared_buffers.\n>\n\nWe add about 10-20 million records per day with each being about 200 bytes\nin size (there's a bytea in there with that being the average size) to each\ntable and there's 64 GB of RAM on the machine.\n\n\n>\n> Cheers,\n>\n> Jeff\n>\n\nOn Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe <[email protected]>wrote:\n> I'll add that you can use assymetric partitioning if you tend to do a\n> lot of more fine grained queries on recent data and more big roll up\n> on older ones. I.e. partition by month except for the last 30 days, do\n> it by day etc. Then at the end of the month roll all the days into a\n> month partition and delete them.\n\nThis sounds like a great solution for us. Is there some trick to roll the\nrecords from one partition to another? Or is the only way just a SELECT\nINTO followed by a DELETE?\n\nThanks,\nDave\n\nSorry for the delay response. We had some hardware/configuration issues that appear to be solved now, so now we're starting to actually play with modifying the database.\nOn Sat, Dec 7, 2013 at 1:29 PM, Jeff Janes <[email protected]> wrote:\nOn Thu, Dec 5, 2013 at 7:36 AM, Dave Johansen <[email protected]> wrote:\nI'm managing a database that is adding about 10-20M records per day to a table and time is a core part of most queries, \nWhat is the nature of how the time column is used in the queries?   Depending on how it is used, you might not get much improvement at all, or you might get N fold improvement, or you might find that re-designing your indexes could get you the same query improvement that partitioning would, but with less headache.\nThe time column is usually used to calculate statistics, find/analyze duplicates, analyze data contents, etc on a specific time window. So there will be queries with GROUP BY and WINDOWs with a specific time filter in the where clause.\n \nso I've been looking into seeing if I need to start using partitioning based on the time column and I've found these general guidelines:\nDon't use more than about 50 paritions ( http://www.postgresql.org/message-id/[email protected] )\n\nUse triggers to make the interface easier ( https://wiki.postgresql.org/wiki/Table_partitioning#Trigger-based and http://stackoverflow.com/questions/16049396/postgres-partition-by-week )\nUsing triggers slows INSERTs down by a lot (unless they were already slow due to the need to hit disk to maintain the indexes or something like that).  Are you sure you can handle that slow down, given your insertion rate?  
You could get the best of both worlds by having your bulk loaders target the correct partition directly, but also have the triggers on the parent table for any programs that don't get the message.\nInserting directly into the correct partition whenever possible and leaving the trigger on the parent table seems like the best option. \n\n\nThe only data I found fell inline with what you'd expect (i.e. speeds up selects but slows down inserts/updates http://www.if-not-true-then-false.com/2009/performance-testing-between-partitioned-and-non-partitioned-postgresql-tables-part-3/ )\nOne of the big benefits of partitioning can be to speed up insertions tremendously, by keeping the hot part of the indices that need to be maintained upon insertion together in shared_buffers.\nWe insert lots of new data, but rarely modify existing data once it's in the database, so it sounds like this would be a big benefit for us. \n\n So I was thinking that partitioning based on month to keep the number of partitions low, so that would mean about 0.5G records in each table. Does that seem like a reasonable number of records in each partition? Is there anything else that I should consider or be aware of?\nHow will data be expired?  Hows does the size of one of your intended partitions compare to your RAM and shared_buffers.\nWe add about 10-20 million records per day with each being about 200 bytes in size (there's a bytea in there with that being the average size) to each table and there's 64 GB of RAM on the machine.\n  Cheers,\n\nJeff\nOn Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe <[email protected]> wrote:> I'll add that you can use assymetric partitioning if you tend to do a\n> lot of more fine grained queries on recent data and more big roll up> on older ones. I.e. partition by month except for the last 30 days, do> it by day etc. Then at the end of the month roll all the days into a\n> month partition and delete them.This sounds like a great solution for us. Is there some trick to roll the records from one partition to another? Or is the only way just a SELECT INTO followed by a DELETE?\nThanks,Dave", "msg_date": "Thu, 19 Dec 2013 09:53:35 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "On Thu, Dec 19, 2013 at 9:53 AM, Dave Johansen <[email protected]> wrote:\n>>\n> On Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe <[email protected]>\n> wrote:\n>> I'll add that you can use assymetric partitioning if you tend to do a\n>> lot of more fine grained queries on recent data and more big roll up\n>> on older ones. I.e. partition by month except for the last 30 days, do\n>> it by day etc. Then at the end of the month roll all the days into a\n>> month partition and delete them.\n>\n> This sounds like a great solution for us. Is there some trick to roll the\n> records from one partition to another? Or is the only way just a SELECT INTO\n> followed by a DELETE?\n\nThat's pretty much it. What I did was to create the new month table\nand day tables, alter my triggers to reflect this, then move the data\nwith insert into / select from query for each old day partition. Then\nonce their data is moved you can just drop them. 
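In sketch form, with invented names (a parent events table partitioned on a created_at timestamp), that looks roughly like:

-- new monthly partition; the CHECK keeps constraint exclusion working
CREATE TABLE events_2013_11 (
    CHECK (created_at >= '2013-11-01' AND created_at < '2013-12-01')
) INHERITS (events);

-- repoint the routing trigger function so November rows now target
-- events_2013_11 instead of the per-day tables, then copy each day over:
INSERT INTO events_2013_11 SELECT * FROM events_2013_11_01;
INSERT INTO events_2013_11 SELECT * FROM events_2013_11_02;
-- ... one statement per old day partition, then drop the emptied tables:
DROP TABLE events_2013_11_01;
DROP TABLE events_2013_11_02;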
Since you changed the\ntriggers first those tables are no long taking input so it's usually\nsafe to drop them now.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 10:27:54 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "On Thu, Dec 19, 2013 at 10:27 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Thu, Dec 19, 2013 at 9:53 AM, Dave Johansen <[email protected]>\n> wrote:\n> >>\n> > On Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe <[email protected]>\n> > wrote:\n> >> I'll add that you can use assymetric partitioning if you tend to do a\n> >> lot of more fine grained queries on recent data and more big roll up\n> >> on older ones. I.e. partition by month except for the last 30 days, do\n> >> it by day etc. Then at the end of the month roll all the days into a\n> >> month partition and delete them.\n> >\n> > This sounds like a great solution for us. Is there some trick to roll the\n> > records from one partition to another? Or is the only way just a SELECT\n> INTO\n> > followed by a DELETE?\n>\n> That's pretty much it. What I did was to create the new month table\n> and day tables, alter my triggers to reflect this, then move the data\n> with insert into / select from query for each old day partition. Then\n> once their data is moved you can just drop them. Since you changed the\n> triggers first those tables are no long taking input so it's usually\n> safe to drop them now.\n>\n\nIt would be nice if there was just a \"move command\", but that seems like\nthe type of model that we want and we'll probably move to that.\n\nOn a semi-related note, I was trying to move from the single large table to\nthe partitions and doing INSERT INTO SELECT * FROM WHERE ... was running\nvery slow (I believe because of the same index issue that we've been\nrunning into), so then I tried creating a BEFORE INSERT trigger that was\nworking and using pg_restore on an -Fc dump. The documentation says that\ntriggers are executed as part of a COPY FROM (\nhttp://www.postgresql.org/docs/8.4/static/sql-copy.html ), but it doesn't\nappear that the trigger was honored because all of the data was put into\nthe base table and all of the partitions are empty.\n\nIs there a way that I can run pg_restore that will properly honor the\ntrigger? Or do I just have to create a new INSERTs dump?\n\nThanks,\nDave\n\nOn Thu, Dec 19, 2013 at 10:27 AM, Scott Marlowe <[email protected]> wrote:\nOn Thu, Dec 19, 2013 at 9:53 AM, Dave Johansen <[email protected]> wrote:\n\n>>\n> On Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe <[email protected]>\n> wrote:\n>> I'll add that you can use assymetric partitioning if you tend to do a\n>> lot of more fine grained queries on recent data and more big roll up\n>> on older ones. I.e. partition by month except for the last 30 days, do\n>> it by day etc. Then at the end of the month roll all the days into a\n>> month partition and delete them.\n>\n> This sounds like a great solution for us. Is there some trick to roll the\n> records from one partition to another? Or is the only way just a SELECT INTO\n> followed by a DELETE?\n\nThat's pretty much it. What I did was to create the new month table\nand day tables, alter my triggers to reflect this, then move the data\nwith insert into / select from query for each old day partition. 
Then\nonce their data is moved you can just drop them. Since you changed the\ntriggers first those tables are no long taking input so it's usually\nsafe to drop them now.\nIt would be nice if there was just a \"move command\", but that seems like the type of model that we want and we'll probably move to that.\nOn a semi-related note, I was trying to move from the single large table to the partitions and doing INSERT INTO SELECT * FROM WHERE ... was running very slow (I believe because of the same index issue that we've been running into), so then I tried creating a BEFORE INSERT trigger that was working and using pg_restore on an -Fc dump. The documentation says that triggers are executed as part of a COPY FROM ( http://www.postgresql.org/docs/8.4/static/sql-copy.html ), but it doesn't appear that the trigger was honored because all of the data was put into the base table and all of the partitions are empty.\nIs there a way that I can run pg_restore that will properly honor the trigger? Or do I just have to create a new INSERTs dump?Thanks,Dave", "msg_date": "Fri, 20 Dec 2013 08:52:51 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "Dave Johansen escribi�:\n> On Thu, Dec 19, 2013 at 10:27 AM, Scott Marlowe <[email protected]>wrote:\n\n> > That's pretty much it. What I did was to create the new month table\n> > and day tables, alter my triggers to reflect this, then move the data\n> > with insert into / select from query for each old day partition. Then\n> > once their data is moved you can just drop them. Since you changed the\n> > triggers first those tables are no long taking input so it's usually\n> > safe to drop them now.\n> \n> It would be nice if there was just a \"move command\", but that seems like\n> the type of model that we want and we'll probably move to that.\n\nEh. Why can't you just do something like\n\nWITH moved AS (\n\tDELETE FROM src WHERE ..\n\tRETURNING *\n) INSERT INTO dst SELECT * FROM moved;\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 20 Dec 2013 12:59:54 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "On Fri, Dec 20, 2013 at 7:52 AM, Dave Johansen <[email protected]>wrote:\n\n> It would be nice if there was just a \"move command\", but that seems like\n> the type of model that we want and we'll probably move to that.\n\n\nI haven't been following this thread, but this comment caught my eye. Are\nyou after the \"NO INHERIT\" command?\nhttp://www.postgresql.org/docs/current/static/sql-altertable.html Search\nfor the \"NO INHERIT\" clause -- it will allow you to detach a child table\nfrom an inherited parent which can then archive or copy into another table.\nInserting into the rolled-up partition was already mentioned upthread I see.\n\nOn Fri, Dec 20, 2013 at 7:52 AM, Dave Johansen <[email protected]> wrote:\nIt would be nice if there was just a \"move command\", but that seems like the type of model that we want and we'll probably move to that.\nI haven't been following this thread, but this comment caught my eye. 
Are you after the \"NO INHERIT\" command?http://www.postgresql.org/docs/current/static/sql-altertable.html Search for the \"NO INHERIT\" clause -- it will allow you to detach a child table from an inherited parent which can then archive or copy into another table. Inserting into the rolled-up partition was already mentioned upthread I see.", "msg_date": "Fri, 20 Dec 2013 08:07:22 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "On 12/20/2013 09:59 AM, Alvaro Herrera wrote:\n\n> WITH moved AS (\n> \tDELETE FROM src WHERE ..\n> \tRETURNING *\n> ) INSERT INTO dst SELECT * FROM moved;\n\nI know that's effectively an atomic action, but I'd feel a lot more \ncomfortable reversing that logic so the delete is based on the results \nof the insert.\n\nWITH saved AS (\n INSERT INTO dst\n SELECT * FROM src WHERE ...\n RETURNING *\n)\nDELETE FROM src\n WHERE ...;\n\nI'll admit yours is cleaner, though. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 20 Dec 2013 10:18:21 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "On Fri, Dec 20, 2013 at 9:18 AM, Shaun Thomas <[email protected]>wrote:\n\n> On 12/20/2013 09:59 AM, Alvaro Herrera wrote:\n>\n> WITH moved AS (\n>> DELETE FROM src WHERE ..\n>> RETURNING *\n>> ) INSERT INTO dst SELECT * FROM moved;\n>>\n>\n> I know that's effectively an atomic action, but I'd feel a lot more\n> comfortable reversing that logic so the delete is based on the results of\n> the insert.\n>\n> WITH saved AS (\n> INSERT INTO dst\n> SELECT * FROM src WHERE ...\n> RETURNING *\n> )\n> DELETE FROM src\n> WHERE ...;\n>\n> I'll admit yours is cleaner, though. :)\n>\n\nThat is a good idea. I didn't even realize that there was such a command,\nso I'll definitely use those.\n\nOn Fri, Dec 20, 2013 at 9:18 AM, Shaun Thomas <[email protected]> wrote:\nOn 12/20/2013 09:59 AM, Alvaro Herrera wrote:\n\n\nWITH moved AS (\n        DELETE FROM src WHERE ..\n        RETURNING *\n) INSERT INTO dst SELECT * FROM moved;\n\n\nI know that's effectively an atomic action, but I'd feel a lot more comfortable reversing that logic so the delete is based on the results of the insert.\n\nWITH saved AS (\n    INSERT INTO dst\n    SELECT * FROM src WHERE ...\n    RETURNING *\n)\nDELETE FROM src\n WHERE ...;\n\nI'll admit yours is cleaner, though. :)That is a good idea. I didn't even realize that there was such a command, so I'll definitely use those.", "msg_date": "Fri, 20 Dec 2013 09:23:07 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommendations for partitioning?" 
}, { "msg_contents": "On Fri, Dec 20, 2013 at 8:52 AM, Dave Johansen <[email protected]>wrote:\n\n> On Thu, Dec 19, 2013 at 10:27 AM, Scott Marlowe <[email protected]>wrote:\n>\n>> On Thu, Dec 19, 2013 at 9:53 AM, Dave Johansen <[email protected]>\n>> wrote:\n>> >>\n>> > On Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe <[email protected]>\n>> > wrote:\n>> >> I'll add that you can use assymetric partitioning if you tend to do a\n>> >> lot of more fine grained queries on recent data and more big roll up\n>> >> on older ones. I.e. partition by month except for the last 30 days, do\n>> >> it by day etc. Then at the end of the month roll all the days into a\n>> >> month partition and delete them.\n>> >\n>> > This sounds like a great solution for us. Is there some trick to roll\n>> the\n>> > records from one partition to another? Or is the only way just a SELECT\n>> INTO\n>> > followed by a DELETE?\n>>\n>> That's pretty much it. What I did was to create the new month table\n>> and day tables, alter my triggers to reflect this, then move the data\n>> with insert into / select from query for each old day partition. Then\n>> once their data is moved you can just drop them. Since you changed the\n>> triggers first those tables are no long taking input so it's usually\n>> safe to drop them now.\n>>\n>\n> It would be nice if there was just a \"move command\", but that seems like\n> the type of model that we want and we'll probably move to that.\n>\n> On a semi-related note, I was trying to move from the single large table\n> to the partitions and doing INSERT INTO SELECT * FROM WHERE ... was running\n> very slow (I believe because of the same index issue that we've been\n> running into), so then I tried creating a BEFORE INSERT trigger that was\n> working and using pg_restore on an -Fc dump. The documentation says that\n> triggers are executed as part of a COPY FROM (\n> http://www.postgresql.org/docs/8.4/static/sql-copy.html ), but it doesn't\n> appear that the trigger was honored because all of the data was put into\n> the base table and all of the partitions are empty.\n>\n> Is there a way that I can run pg_restore that will properly honor the\n> trigger? Or do I just have to create a new INSERTs dump?\n>\n\nIt turns out that this was an error on my part. I was using an old script\nto do the restore and it had --disable-triggers to prevent the foreign keys\nfrom being checked and that was the actual source of my problem.\n\nOn Fri, Dec 20, 2013 at 8:52 AM, Dave Johansen <[email protected]> wrote:\nOn Thu, Dec 19, 2013 at 10:27 AM, Scott Marlowe <[email protected]> wrote:\nOn Thu, Dec 19, 2013 at 9:53 AM, Dave Johansen <[email protected]> wrote:\n\n\n>>\n> On Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe <[email protected]>\n> wrote:\n>> I'll add that you can use assymetric partitioning if you tend to do a\n>> lot of more fine grained queries on recent data and more big roll up\n>> on older ones. I.e. partition by month except for the last 30 days, do\n>> it by day etc. Then at the end of the month roll all the days into a\n>> month partition and delete them.\n>\n> This sounds like a great solution for us. Is there some trick to roll the\n> records from one partition to another? Or is the only way just a SELECT INTO\n> followed by a DELETE?\n\nThat's pretty much it. What I did was to create the new month table\nand day tables, alter my triggers to reflect this, then move the data\nwith insert into / select from query for each old day partition. Then\nonce their data is moved you can just drop them. 
Since you changed the\ntriggers first those tables are no long taking input so it's usually\nsafe to drop them now.\nIt would be nice if there was just a \"move command\", but that seems like the type of model that we want and we'll probably move to that.\n\nOn a semi-related note, I was trying to move from the single large table to the partitions and doing INSERT INTO SELECT * FROM WHERE ... was running very slow (I believe because of the same index issue that we've been running into), so then I tried creating a BEFORE INSERT trigger that was working and using pg_restore on an -Fc dump. The documentation says that triggers are executed as part of a COPY FROM ( http://www.postgresql.org/docs/8.4/static/sql-copy.html ), but it doesn't appear that the trigger was honored because all of the data was put into the base table and all of the partitions are empty.\nIs there a way that I can run pg_restore that will properly honor the trigger? Or do I just have to create a new INSERTs dump?\nIt turns out that this was an error on my part. I was using an old script to do the restore and it had --disable-triggers to prevent the foreign keys from being checked and that was the actual source of my problem.", "msg_date": "Fri, 20 Dec 2013 09:24:40 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "On Fri, Dec 20, 2013 at 7:59 AM, Alvaro Herrera\n<[email protected]> wrote:\n> Dave Johansen escribió:\n>> On Thu, Dec 19, 2013 at 10:27 AM, Scott Marlowe <[email protected]>wrote:\n>\n>> > That's pretty much it. What I did was to create the new month table\n>> > and day tables, alter my triggers to reflect this, then move the data\n>> > with insert into / select from query for each old day partition. Then\n>> > once their data is moved you can just drop them. Since you changed the\n>> > triggers first those tables are no long taking input so it's usually\n>> > safe to drop them now.\n>>\n>> It would be nice if there was just a \"move command\", but that seems like\n>> the type of model that we want and we'll probably move to that.\n>\n> Eh. Why can't you just do something like\n>\n> WITH moved AS (\n> DELETE FROM src WHERE ..\n> RETURNING *\n> ) INSERT INTO dst SELECT * FROM moved;\n\nAvero, I think it could be cheaper to do this like it is shown below, correct?\n\npsql dbname -c 'copy src to stdout' | \\\npsql dbname -c 'copy dst from stdin; truncate src;'\n\nDave, in case if you need to archive old partitions to compressed\nfiles out of your database you can use this tool [1]. Consult with the\nconfiguration example [2], look at the ARCHIVE_* parameters.\n\n[1] https://github.com/grayhemp/pgcookbook/blob/master/bin/archive_tables.sh\n[2] https://github.com/grayhemp/pgcookbook/blob/master/bin/config.sh.example\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 Dec 2013 12:21:53 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommendations for partitioning?" }, { "msg_contents": "Sergey Konoplev escribi�:\n> On Fri, Dec 20, 2013 at 7:59 AM, Alvaro Herrera\n> <[email protected]> wrote:\n\n> > Eh. 
Why can't you just do something like\n> >\n> > WITH moved AS (\n> > DELETE FROM src WHERE ..\n> > RETURNING *\n> > ) INSERT INTO dst SELECT * FROM moved;\n> \n> Avero, I think it could be cheaper to do this like it is shown below, correct?\n> \n> psql dbname -c 'copy src to stdout' | \\\n> psql dbname -c 'copy dst from stdin; truncate src;'\n\nYes, if you can get rid of the old records by removing or emptying a\npartition (or de-inheriting it, as suggested elsewhere in the thread),\nthat's better than DELETE because that way you don't create dead rows to\nvacuum later.\n\n-- \nÁlvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 Dec 2013 18:29:07 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommendations for partitioning?" } ]
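To make the de-inheriting route from the thread above concrete (names invented, following the earlier sketches): detaching a whole child partition retires its rows without any DELETE, so no dead rows are left behind for vacuum.

-- Detach the old child; queries against the parent stop seeing its rows.
ALTER TABLE events_2013_01 NO INHERIT events;

-- Then keep it, move it to cheaper storage, dump it out, or drop it:
ALTER TABLE events_2013_01 SET TABLESPACE archive_space;  -- tablespace name is illustrative
-- or simply:
DROP TABLE events_2013_01;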
[ { "msg_contents": "On 12/05/2013 02:42 AM, Max wrote:\n> Hello,\n> \n> We are starting a new project to deploy a solution in cloud with the possibility to be used for 2.000+ clients. Each of this clients will use several tables to store their information (our model has about 500+ tables but there's less than 100 core table with heavy use). Also the projected ammout of information per client could be from small (few hundreds tuples/MB) to huge (few millions tuples/GB).\n> \n> One of the many questions we have is about performance of the db if we work with only one (using a ClientID to separete de clients info) or thousands of separate dbs. The management of the dbs is not a huge concert as we have an automated tool.\n\nIn addition to the excellent advice from others, I'll speak from experience:\n\nThe best model here, if you can implement it, is to implement shared\ntables for all customers, but have a way you can \"break out\" customers\nto their own database(s). This allows you to start with a single\ndatabase, but to shard out your larger customers as they grow. The\nsmall customers will always stay on the same DB.\n\nThat means you'll also treat the different customers as different DB\nconnections from day 1. That way, when you move the large customers out\nto separate servers, you don't have to change the way the app connects\nto the database.\n\nIf you can't implement shared tables, I'm going to say go for separate\ndatabases. This will mean lots of additional storage space -- the\nper-DB overhead by itself will be 100GB -- but otherwise you'll be\ngrappling with the issues involved in having a million tables, which Joe\nConway outlined. But if you don't have shared tables, your huge schema\nis always going to cause you to waste resources on the smaller customers.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 05 Dec 2013 16:01:33 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: One huge db vs many small dbs" }, { "msg_contents": "2013/12/6 Josh Berkus <[email protected]>\n\n> On 12/05/2013 02:42 AM, Max wrote:\n> > Hello,\n> >\n> > We are starting a new project to deploy a solution in cloud with the\n> possibility to be used for 2.000+ clients. Each of this clients will use\n> several tables to store their information (our model has about 500+ tables\n> but there's less than 100 core table with heavy use). Also the projected\n> ammout of information per client could be from small (few hundreds\n> tuples/MB) to huge (few millions tuples/GB).\n> >\n> > One of the many questions we have is about performance of the db if we\n> work with only one (using a ClientID to separete de clients info) or\n> thousands of separate dbs. The management of the dbs is not a huge concert\n> as we have an automated tool.\n>\n> In addition to the excellent advice from others, I'll speak from\n> experience:\n>\n> The best model here, if you can implement it, is to implement shared\n> tables for all customers, but have a way you can \"break out\" customers\n> to their own database(s). This allows you to start with a single\n> database, but to shard out your larger customers as they grow. The\n> small customers will always stay on the same DB.\n>\n> That means you'll also treat the different customers as different DB\n> connections from day 1. 
That way, when you move the large customers out\n> to separate servers, you don't have to change the way the app connects\n> to the database.\n>\n> If you can't implement shared tables, I'm going to say go for separate\n> databases. This will mean lots of additional storage space -- the\n> per-DB overhead by itself will be 100GB -- but otherwise you'll be\n> grappling with the issues involved in having a million tables, which Joe\n> Conway outlined. But if you don't have shared tables, your huge schema\n> is always going to cause you to waste resources on the smaller customers.\n>\n\n\n+1\n\nPavel\n\n\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 6 Dec 2013 06:58:36 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: One huge db vs many small dbs" } ]
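A minimal sketch of the shared-table layout described above, with all table and column names being assumptions rather than anything from the thread: every tenant-owned row carries a client_id, indexes lead with that column, and the application resolves each client to a connection string, so a large tenant can later be moved to its own server without changing application code.

-- Hypothetical multi-tenant schema
CREATE TABLE clients (
    client_id integer PRIMARY KEY,
    name      text NOT NULL,
    dsn       text NOT NULL  -- connection string: the shared server for most clients, a dedicated one for large clients
);

CREATE TABLE invoices (
    client_id  integer NOT NULL REFERENCES clients (client_id),
    invoice_id bigserial,
    amount     numeric(12,2),
    created_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (client_id, invoice_id)
);

-- Leading every index with client_id keeps per-tenant access cheap inside the shared tables
CREATE INDEX invoices_client_created_idx ON invoices (client_id, created_at);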
[ { "msg_contents": "Hi\n\nI'm just trying about PostgreSQL, I create a database \"test\" with a table\n\"t1\":\n\ntest=> \\d t1\n Table \"public.t1\"\n Column | Type | Modifiers\n---------+---------+-----------------------------------------------------\n col_id | integer | not null default nextval('t1_col_id_seq'::regclass)\n col_int | integer |\nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (col_id)\n \"t1_col_int_idx\" btree (col_int)\n\nThere are 10001000 rows in that table, and basically col_int of each row\nare filled with random data within range [0,1024].\n\nOne strange thing I found is that:\n\ntest=> select distinct col_int from t1;\nTime: 1258.627 ms\ntest=> select distinct col_int from t1;\nTime: 1264.667 ms\ntest=> select distinct col_int from t1;\nTime: 1261.805 ms\n\nIf I use \"group by\":\n\ntest=> select distinct col_int from t1 group by col_int;\nTime: 1180.617 ms\ntest=> select distinct col_int from t1 group by col_int;\nTime: 1179.849 ms\ntest=> select distinct col_int from t1 group by col_int;\nTime: 1177.936 ms\n\nSo the performance difference is not very large.\nBut when I do that:\n\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 row)\n\nTime: 7367.476 ms\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 row)\n\nTime: 6946.233 ms\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 row)\n\nTime: 7386.969 ms\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 row)\n\n\nThe speed is straightly worse! But it's doesn't make sense. Since we can\njust make a temporary table or subquery and count for all rows. The\nperformance should be almost the same.\nSo do you have any idea about this? Or maybe PostgreSQL's query planner can\nbe improved for this kinds of query?\n\nBy the way, if I use a subquery to replace the above query, here's what I\ngot:\n\ntest=> select count(*) from (select distinct col_int from t1) as tmp;\n count\n-------\n 1025\n(1 row)\n\nTime: 1267.468 ms\ntest=> select count(*) from (select distinct col_int from t1) as tmp;\n count\n-------\n 1025\n(1 row)\n\nTime: 1257.327 ms\ntest=> select count(*) from (select distinct col_int from t1) as tmp;\n count\n-------\n 1025\n(1 row)\n\nTime: 1258.189 ms\n\n\nOK, this workaround works. But I just think if postgres can improve with\nthis kind of query, it will be better. 
Also I'm not sure about other\nsimilar scenarios that will cause similar problem.\n\nThe following is the output of \"explain analyze ...\":\ntest=> explain analyze select distinct col_int from t1;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=169268.05..169278.30 rows=1025 width=4) (actual\ntime=39034.653..39037.239 rows=1025 loops=1)\n -> Seq Scan on t1 (cost=0.00..144265.04 rows=10001204 width=4) (actual\ntime=0.041..19619.931 rows=10001000 loops=1)\n Total runtime: 39039.136 ms\n(3 rows)\n\nTime: 39103.622 ms\ntest=> explain analyze select distinct col_int from t1 group by col_int;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=169280.86..169291.11 rows=1025 width=4) (actual\ntime=39062.417..39064.882 rows=1025 loops=1)\n -> HashAggregate (cost=169268.05..169278.30 rows=1025 width=4) (actual\ntime=39058.136..39060.303 rows=1025 loops=1)\n -> Seq Scan on t1 (cost=0.00..144265.04 rows=10001204 width=4)\n(actual time=0.024..19439.482 rows=10001000 loops=1)\n Total runtime: 39066.896 ms\n(4 rows)\n\nTime: 39067.198 ms\ntest=> explain analyze select count(distinct col_int) from t1;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=169268.05..169268.06 rows=1 width=4) (actual\ntime=45994.120..45994.123 rows=1 loops=1)\n -> Seq Scan on t1 (cost=0.00..144265.04 rows=10001204 width=4) (actual\ntime=0.025..19599.950 rows=10001000 loops=1)\n Total runtime: 45994.154 ms\n(3 rows)\n\nTime: 45994.419 ms\n\ntest=> explain analyze select count(*) from (select distinct col_int from\nt1) as tmp;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=169291.11..169291.12 rows=1 width=0) (actual\ntime=39050.598..39050.600 rows=1 loops=1)\n -> HashAggregate (cost=169268.05..169278.30 rows=1025 width=4) (actual\ntime=39046.814..39048.742 rows=1025 loops=1)\n -> Seq Scan on t1 (cost=0.00..144265.04 rows=10001204 width=4)\n(actual time=0.035..19616.631 rows=10001000 loops=1)\n Total runtime: 39050.634 ms\n(4 rows)\n\nTime: 39050.896 ms\n\nP.S. I have already use \"analyze verbose t1;\" several times so the database\nshould already be optimized. But I'm just new to PostgreSQL.\n\nThe environment I use is:\nPostgreSQL 9.3.1 (postgresql-9.3 9.3.1-1.pgdg12.4+1) on local machine\n(actually a vbox VM, but when I try the test, I didn't run something very\ndifferent or some heavy program for all queries. 
And the result seems\nconsistent.)\nUbuntu 12.04 LTS\nI followed the installation step in Quickstart section of\nhttp://wiki.postgresql.org/wiki/Apt\n\nbest regards,\njacket41142\n", "msg_date": "Fri, 6 Dec 2013 10:10:26 +0800", "msg_from": "jacket41142 <[email protected]>", "msg_from_op": true, "msg_subject": "select count(distinct ...) is slower than select distinct in about 5x" } ]
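For anyone who wants to reproduce the timings discussed above, a minimal sketch of how the test table could be loaded; the original mail does not include its load script, so the INSERT below is an assumption that merely matches the description (about 10 million rows, col_int roughly uniform in [0,1024]).

-- Assumed reproduction of the test data described in the thread
CREATE TABLE t1 (
    col_id  serial PRIMARY KEY,
    col_int integer
);

INSERT INTO t1 (col_int)
SELECT (random() * 1024)::integer
FROM generate_series(1, 10001000);

CREATE INDEX t1_col_int_idx ON t1 (col_int);
VACUUM ANALYZE t1;

-- The two forms compared in the thread: count(DISTINCT ...) is evaluated inside the
-- aggregate using a sort, while the subquery form lets the planner hash-aggregate first.
SELECT count(DISTINCT col_int) FROM t1;
SELECT count(*) FROM (SELECT DISTINCT col_int FROM t1) AS tmp;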
[ { "msg_contents": "Hi,\n \nI want to realize a Full Text Search with the tsearch2 extension. It should find similar sentences.\n \nI used my own trigger to store the tsvector of the sentences and I created a usual gist index on them.\nI have to to use many OR statements with a low set of arguments, what heavy damages the performance.\nMy former query looked like this:\n \nSELECT strip(to_tsvector('The tiger is the largest cat species, reaching a total body length of up to 3.3 m and weighing up to 306 kg.'));\n strip \n----------------------------------------------------------------------------------------------\n '3.3' '306' 'bodi' 'cat' 'kg' 'largest' 'length' 'm' 'reach' 'speci' 'tiger' 'total' 'weigh'\n(1 row)\n \nSELECT * FROM tablename WHERE vector @@ to_tsquery('speci & tiger & total & weigh') AND vector @@ to_tsquery('largest & length & m & reach')  AND vector @@ to_tsquery('3.3 & 306 & bodi & cat & kg');\n\nAnd thats very slow.\nIs there a better solution like a functional index?\n\nThank you for your help.\n \nJanek Sendrowski\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 6 Dec 2013 15:30:57 +0100 (CET)", "msg_from": "\"Janek Sendrowski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Similarity search with the tsearch2 extension" }, { "msg_contents": "Janek Sendrowski <[email protected]> wrote:\n\n> I want to realize a Full Text Search with the tsearch2 extension.\n> It should find similar sentences.\n>  \n> I used my own trigger to store the tsvector of the sentences and\n> I created a usual gist index on them.\n> I have to to use many OR statements with a low set of arguments,\n> what heavy damages the performance.\n> My former query looked like this:\n\n> SELECT * FROM tablename\n>   WHERE vector @@ to_tsquery('speci & tiger & total & weigh')\n>     AND vector @@ to_tsquery('largest & length & m & reach')\n>     AND vector @@ to_tsquery('3.3 & 306 & bodi & cat & kg');\n\nI don't see any OR operators there.\n\n> And thats very slow.\n\nAre you sure it is using the index?\n\nAnyway, it is better to show an example, with EXPLAIN ANALYZE\noutput.  
Here's mine, involving searches of War and Peace.\n\ntest=# -- Create the table.\ntest=# -- In reality, I would probably make tsv NOT NULL,\ntest=# -- but I'm keeping the example simple...\ntest=# CREATE TABLE war_and_peace\ntest-#   (\ntest(#     lineno serial PRIMARY KEY,\ntest(#     linetext text NOT NULL,\ntest(#     tsv tsvector\ntest(#   );\nCREATE TABLE\ntest=# \ntest=# -- Load from downloaded data into database.\ntest=# COPY war_and_peace (linetext)\ntest-#   FROM '/home/kgrittn/Downloads/war-and-peace.txt';\nCOPY 65007\ntest=# \ntest=# -- \"Digest\" data to lexemes.\ntest=# UPDATE war_and_peace\ntest-#   SET tsv = to_tsvector('english', linetext);\nUPDATE 65007\ntest=# \ntest=# -- Index the lexemes using GIN.\ntest=# CREATE INDEX war_and_peace_tsv\ntest-#   ON war_and_peace\ntest-#   USING gin (tsv);\nCREATE INDEX\ntest=# \ntest=# -- Make sure the database has statistics.\ntest=# VACUUM ANALYZE war_and_peace;\nVACUUM\ntest=# \ntest=# -- Find lines with \"gentlemen\".\ntest=# EXPLAIN ANALYZE\ntest-# SELECT * FROM war_and_peace\ntest-#   WHERE tsv @@ to_tsquery('english', 'gentlemen');\n                                                         QUERY PLAN                                                          \n-----------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on war_and_peace  (cost=12.52..240.18 rows=67 width=115) (actual time=0.058..0.130 rows=84 loops=1)\n   Recheck Cond: (tsv @@ '''gentlemen'''::tsquery)\n   ->  Bitmap Index Scan on war_and_peace_tsv  (cost=0.00..12.50 rows=67 width=0) (actual time=0.045..0.045 rows=84 loops=1)\n         Index Cond: (tsv @@ '''gentlemen'''::tsquery)\n Total runtime: 0.160 ms\n(5 rows)\n\ntest=# \ntest=# -- Find lines with \"ladies\".\ntest=# EXPLAIN ANALYZE\ntest-# SELECT * FROM war_and_peace\ntest-#   WHERE tsv @@ to_tsquery('english', 'ladies');\n                                                          QUERY PLAN                                                           \n-------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on war_and_peace  (cost=13.39..547.24 rows=180 width=115) (actual time=0.062..0.215 rows=184 loops=1)\n   Recheck Cond: (tsv @@ '''ladi'''::tsquery)\n   ->  Bitmap Index Scan on war_and_peace_tsv  (cost=0.00..13.35 rows=180 width=0) (actual time=0.043..0.043 rows=184 loops=1)\n         Index Cond: (tsv @@ '''ladi'''::tsquery)\n Total runtime: 0.247 ms\n(5 rows)\n\ntest=# \ntest=# -- Find lines with \"ladies\" and \"gentlemen\".\ntest=# EXPLAIN ANALYZE\ntest-# SELECT * FROM war_and_peace\ntest-#   WHERE tsv @@ to_tsquery('english', 'ladies & gentlemen');\n                                                        QUERY PLAN                                                         \n---------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on war_and_peace  (cost=20.00..24.01 rows=1 width=115) (actual time=0.062..0.063 rows=1 loops=1)\n   Recheck Cond: (tsv @@ '''ladi'' & ''gentlemen'''::tsquery)\n   ->  Bitmap Index Scan on war_and_peace_tsv  (cost=0.00..20.00 rows=1 width=0) (actual time=0.057..0.057 rows=1 loops=1)\n         Index Cond: (tsv @@ '''ladi'' & ''gentlemen'''::tsquery)\n Total runtime: 0.090 ms\n(5 rows)\n\ntest=# \ntest=# -- Find lines with (\"ladies\" and \"gentlemen\") and (\"provinces\" and \"distance\").\ntest=# EXPLAIN ANALYZE\ntest-# 
SELECT * FROM war_and_peace\ntest-#   WHERE tsv @@ to_tsquery('english', 'ladies & gentlemen')\ntest-#     AND tsv @@ to_tsquery('english', 'provinces & distance');\n                                                        QUERY PLAN                                                         \n---------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on war_and_peace  (cost=36.00..40.02 rows=1 width=115) (actual time=0.100..0.100 rows=1 loops=1)\n   Recheck Cond: ((tsv @@ '''ladi'' & ''gentlemen'''::tsquery) AND (tsv @@ '''provinc'' & ''distanc'''::tsquery))\n   ->  Bitmap Index Scan on war_and_peace_tsv  (cost=0.00..36.00 rows=1 width=0) (actual time=0.095..0.095 rows=1 loops=1)\n         Index Cond: ((tsv @@ '''ladi'' & ''gentlemen'''::tsquery) AND (tsv @@ '''provinc'' & ''distanc'''::tsquery))\n Total runtime: 0.130 ms\n(5 rows)\n\ntest=# \ntest=# -- Find lines with (\"ladies\" or \"gentlemen\") and (\"provinces\" or \"distance\").\ntest=# EXPLAIN ANALYZE\ntest-# SELECT * FROM war_and_peace\ntest-#   WHERE tsv @@ to_tsquery('english', 'ladies | gentlemen')\ntest-#     AND tsv @@ to_tsquery('english', 'provinces | distance');\n                                                        QUERY PLAN                                                         \n---------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on war_and_peace  (cost=36.00..40.02 rows=1 width=115) (actual time=0.043..0.043 rows=1 loops=1)\n   Recheck Cond: ((tsv @@ '''ladi'' | ''gentlemen'''::tsquery) AND (tsv @@ '''provinc'' | ''distanc'''::tsquery))\n   ->  Bitmap Index Scan on war_and_peace_tsv  (cost=0.00..36.00 rows=1 width=0) (actual time=0.042..0.042 rows=1 loops=1)\n         Index Cond: ((tsv @@ '''ladi'' | ''gentlemen'''::tsquery) AND (tsv @@ '''provinc'' | ''distanc'''::tsquery))\n Total runtime: 0.056 ms\n(5 rows)\n\ntest=# \ntest=# -- Find lines with (\"ladies\" and \"gentlemen\") or (\"provinces\" and \"distance\").\ntest=# EXPLAIN ANALYZE\ntest-# SELECT * FROM war_and_peace\ntest-#   WHERE tsv @@ to_tsquery('english', 'ladies & gentlemen')\ntest-#      OR tsv @@ to_tsquery('english', 'provinces & distance');\n                                                           QUERY PLAN                                                            \n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on war_and_peace  (cost=40.00..44.02 rows=1 width=115) (actual time=0.080..0.080 rows=1 loops=1)\n   Recheck Cond: ((tsv @@ '''ladi'' & ''gentlemen'''::tsquery) OR (tsv @@ '''provinc'' & ''distanc'''::tsquery))\n   ->  BitmapOr  (cost=40.00..40.00 rows=1 width=0) (actual time=0.076..0.076 rows=0 loops=1)\n         ->  Bitmap Index Scan on war_and_peace_tsv  (cost=0.00..20.00 rows=1 width=0) (actual time=0.052..0.052 rows=1 loops=1)\n               Index Cond: (tsv @@ '''ladi'' & ''gentlemen'''::tsquery)\n         ->  Bitmap Index Scan on war_and_peace_tsv  (cost=0.00..20.00 rows=1 width=0) (actual time=0.024..0.024 rows=1 loops=1)\n               Index Cond: (tsv @@ '''provinc'' & ''distanc'''::tsquery)\n Total runtime: 0.116 ms\n(8 rows)\n\nCan you provide a similar example which slows the slowness you report?\n                                                                                                                                                                
\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 6 Dec 2013 06:54:35 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Similarity search with the tsearch2 extension" }, { "msg_contents": "Sorry, I used AND-statements instead of OR-statements in the example.\nI noticed that gin is much faster than gist, but I don't know why.\n\nThe query gets slow because there are many non-stop words which appear very often in my sentences, like in 3% of all the sentences.\nDo you think it could be worth it to filter the words which appear that often and declare them as stop-words?\nHow would you split a sentence with, let's say, 10 non-stop words to provide a performant similarity search?\n \nThere's still the problem with very short sentences. A partial index on them with the trigram search might be the solution.\nThe pg_trgm module is far too slow for bigger sentences, like you showed.\n \nI thought I'd build a few partial indexes on the string length to enhance the performance.\nDo you know of any more improvements?\n \nJanek Sendrowski\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Fri, 6 Dec 2013 17:21:13 +0100 (CET)", "msg_from": "\"Janek Sendrowski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Similarity search with the tsearch2 extension" } ]
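One way to try the "partial indexes on the string length" idea from the message above, as a minimal sketch: the table name sentences, its columns, and the length cutoff are assumptions, not something from the thread. The idea is to keep pg_trgm similarity for short strings only, where it stays cheap, and let the tsvector index handle longer ones; note that a query can only use the partial index if it repeats the same length predicate.

-- Hypothetical partial trigram index limited to short sentences
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX sentences_short_trgm_idx
    ON sentences
    USING gin (sentence gin_trgm_ops)
    WHERE length(sentence) < 80;

SELECT id, sentence, similarity(sentence, 'the tiger is the largest cat') AS sml
    FROM sentences
   WHERE length(sentence) < 80
     AND sentence % 'the tiger is the largest cat'
   ORDER BY sml DESC;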
[ { "msg_contents": "hi,\nRegistered with PostgreSQL Help Forum to identify and resolve the Postgres\nDB performance issue, received suggestions but could not improve the\nspeed/response time. Please help.\n\nDetails:\nPostgres Version 9.3.1\nServer configuration:\nProcessor: 2 x Intel Quad core E5620 @ 2.40GHz\nRAM: 16 GB\n\nPostgres configuration:\nEffective cache size = 10 GB\nshared Buffer = 1250 MB\nrandom page cost = 4\n\nTable size = 60 GB\nNumber of records = 44 million\nCarried out Vacuum Analyze after inserting new records and also after\ncreating Index,\n6 months data, every month around 10 GB will get added. Expecting good\nperformance with 3 years data.\nDB Will be used for Reporting/Read, will not be used for transaction. Daily\nrecords will be inserted through bulk insertion every day.\n\nTable schema:\n Table \"public.detailed_report\"\n Column | Type | Modifiers\n-------------------------------+----------------------------+-----------\n group_id | character varying(50) | not null\n client | character varying(50) |\n gateway | character varying(50) |\n call_id | character varying(120) | not null\n parent_call_id | character varying(120) |\n start_time | timestamp with time zone | not null\n connect_time | timestamp with time zone |\n end_time | timestamp with time zone |\n duration | integer |\n source | character varying(50) |\n source_alias | character varying(50) |\n dest_in_number | character varying(50) |\n dest_out_number | character varying(50) |\n bp_code_pay | character varying[] |\n billed_duration_pay | integer[] |\n rate_pay | character varying[] |\n rate_effective_date_pay | timestamp with time zone[] |\n type_value_pay | character varying[] |\n slab_time_pay | character varying[] |\n pin_pay | bigint[] |\n amount_pay | double precision[] |\n adjusted_pin_pay | bigint[] |\n adjusted_amount_pay | double precision[] |\n call_amount_pay | double precision |\n country_code_pay | character varying[] |\n country_desc_pay | character varying[] |\n master_country_code | character varying(15) |\n master_country_desc | character varying(100) |\n bp_code_recv | character varying[] |\n billed_duration_recv | integer[] |\n rate_recv | character varying[] |\n rate_effective_date_recv | timestamp with time zone[] |\n type_value_recv | character varying[] |\n slab_time_recv | character varying[] |\n pin_recv | bigint[] |\n amount_recv | double precision[] |\n adjusted_pin_recv | bigint[] |\n adjusted_amount_recv | double precision[] |\n call_amount_recv | double precision |\n country_code_recv | character varying[] |\n country_desc_recv | character varying[] |\n subscriber_type | character varying(50) |\n pdd | smallint |\n disconnect_reason | character varying(200) |\n source_ip | character varying(20) |\n dest_ip | character varying(20) |\n caller_hop | character varying(20) |\n callee_hop | character varying(20) |\n caller_received_from_hop | character varying(20) |\n callee_sent_to_hop | character varying(20) |\n caller_media_ip_port | character varying(25) |\n callee_media_ip_port | character varying(25) |\n caller_original_media_ip_port | character varying(25) |\n callee_original_media_ip_port | character varying(25) |\n switch_ip | character varying(20) |\n call_shop_amount_paid | boolean |\n version | character varying |\n call_duration_pay | integer |\n call_duration_recv | integer |\n audio_codec | character varying(5) |\n video_codec | character varying(5) |\n shadow_amount_recv | double precision |\n shadow_amount_pay | double precision |\n pulse_applied_recv | 
character varying(50) |\n pulse_applied_pay | character varying(50) |\n\nIndex, multi column, 3 columns, matches exactly with query where condition\n\"endtime_groupid_client_tsidx_detail_report\" btree (end_time DESC,\ngroup_id, client), tablespace \"indexspace\" which exactly matches with\n'where' condition,\n\" WHERE end_time>='2013-05-01 00:00' and end_time<'2013-07-01 00:00' and\ngroup_id='admin' and client ='CHOICE' GROUP by client, gateway;\"\nIndex on a separate tablespace on another hard disk.\n\nQuery:\nEXPLAIN (analyze, buffers) SELECT text(client) as client, text(gateway) as\ngateway,count(*)::bigint as total_calls, (avg(duration)/1000.0)\n::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2) as pdd,\nsum(call_duration_recv)/1000.0 as duration_recv,\nsum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as\ncall_amount_recv, sum(call_amount_pay) as call_amount_\npay FROM detailed_report WHERE end_time>='2013-05-01 00:00' and\nend_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE' GROUP\nby client, gateway ORDER BY call_amount_recv DESC;\nQUERY PLAN\n------------------------------------------------------\nSort (cost=3422863.06..3422868.69 rows=2254 width=44) (actual\ntime=137852.474..137852.474 rows=5 loops=1)\nSort Key: (sum(call_amount_recv))\nSort Method: quicksort Memory: 25kB\nBuffers: shared read=2491664\n-> HashAggregate (cost=3422664.28..3422737.53 rows=2254 width=44) (actual\ntime=137852.402..137852.454 rows=5 loops=1)\nBuffers: shared read=2491664\n-> Bitmap Heap Scan on detailed_report (cost=644828.11..3399506.87\nrows=1029218 width=44) (actual time=4499.558..125443.122 rows=5248227\nloops=1)\nRecheck Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with time\nzone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time zone)\nAND ((group_id)::text = 'adm\nin'::text) AND ((client)::text = 'CHOICE'::text))\nBuffers: shared read=2491664\n-> Bitmap Index Scan on endtime_groupid_client_tsidx_detail_report\n(cost=0.00..644570.81 rows=1029218 width=0) (actual time=3418.754..3418.754\nrows=5248227 loops=1)\nIndex Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with time\nzone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time zone)\nAND ((group_id)::text =\n'admin'::text) AND ((client)::text = 'CHOICE'::text))\nBuffers: shared read=95055\nTotal runtime: *137868.946 ms*\n(13 rows)\n\nChecked by removing ORDER BY, but no improvement.\n\nBy increasing random_page_cost to 25, the query gets executed\nsequentially,Seq Scan on detailed_report, time taken is comparatively\nbetter than Indexed scan. But I am not preferring because when the data\ngrows the sequential scan performance will come down.\n\ncarried out 3 more set of tests:\n1. Index on columns\n2. multiple column index, with 2 columns\n3. 
multiple column index, with three columns\n\nTest Case 1:\n************\nindexes :\n1)client\n2)group_id\n3)gateway\n4)end_time\n\n\"det_rep_pkey\" PRIMARY KEY, btree (group_id, call_id, start_time)\n\"client_detailed_report_idx\" btree (client), tablespace \"indexspace\"\n\"end_time_detailed_report_idx\" btree (end_time), tablespace \"indexspace\"\n\"gateway_detailed_report_idx\" btree (gateway), tablespace \"indexspace\"\n\"group_id_detailed_report_idx\" btree (group_id), tablespace \"indexspace\"\n\ntestdb=# EXPLAIN (analyze,buffers,verbose)SELECT text(client) as client,\ntext(gateway) as gateway,count(*)::bigint as total_calls,\n(avg(duration)/1000.0) ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2)\nas pdd, sum(call_duration_recv)/1000.0 as duration_recv,\nsum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as\ncall_amount_recv, sum(call_amount_pay) as call_amount_pay FROM\ndetailed_report WHERE end_time>='2013-05-01 00:00' and end_time<'2013-07-01\n00:00' and group_id='admin' and client ='CHOICE' GROUP by client, gateway\nORDER BY call_amount_recv DESC;\nQUERY PLAN\n\nSort (cost=3510106.93..3510112.25 rows=2127 width=44) (actual\ntime=148557.599..148557.599 rows=5 loops=1)\nOutput: ((client)::text), ((gateway)::text), (count(*)), (((avg(duration) /\n1000.0))::numeric(10,2)), ((avg(pdd))::numeric(10,2)),\n(((sum(call_duration_recv))::numeric / 1000.0)), (((sum(c\nall_duration_pay))::numeric / 1000.0)), (sum(call_amount_recv)),\n(sum(call_amount_pay)), client, gateway\nSort Key: (sum(detailed_report.call_amount_recv))\nSort Method: quicksort Memory: 25kB\nBuffers: shared hit=69 read=2505035\n-> HashAggregate (cost=3509920.24..3509989.37 rows=2127 width=44) (actual\ntime=148557.556..148557.581 rows=5 loops=1)\nOutput: (client)::text, (gateway)::text, count(*), ((avg(duration) /\n1000.0))::numeric(10,2), (avg(pdd))::numeric(10,2),\n((sum(call_duration_recv))::numeric / 1000.0), ((sum(call_dur\nation_pay))::numeric / 1000.0), sum(call_amount_recv),\nsum(call_amount_pay), client, gateway\nBuffers: shared hit=69 read=2505035\n-> Bitmap Heap Scan on public.detailed_report (cost=832774.93..3487872.62\nrows=979894 width=44) (actual time=14257.148..135355.676 rows=5248227\nloops=1)\nOutput: group_id, client, gateway, call_id, parent_call_id, start_time,\nconnect_time, end_time, duration, source, source_alias, dest_in_number,\ndest_out_number, bp_code_pay, bi\nlled_duration_pay, rate_pay, rate_effective_date_pay, type_value_pay,\nslab_time_pay, pin_pay, amount_pay, adjusted_pin_pay, adjusted_amount_pay,\ncall_amount_pay, country_code_pay, country_des\nc_pay, master_country_code, master_country_desc, bp_code_recv,\nbilled_duration_recv, rate_recv, rate_effective_date_recv, type_value_recv,\nslab_time_recv, pin_recv, amount_recv, adjusted_pin_\nrecv, adjusted_amount_recv, call_amount_recv, country_code_recv,\ncountry_desc_recv, subscriber_type, pdd, disconnect_reason, source_ip,\ndest_ip, caller_hop, callee_hop, caller_received_from_h\nop, callee_sent_to_hop, caller_media_ip_port, callee_media_ip_port,\ncaller_original_media_ip_port, callee_original_media_ip_port, switch_ip,\ncall_shop_amount_paid, version, call_duration_pay,\ncall_duration_recv, audio_codec, video_codec, shadow_amount_recv,\nshadow_amount_pay, pulse_applied_recv, pulse_applied_pay\nRecheck Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND\n((detailed_report.group_id)::text = 'admin'::text) AND\n(detailed_report.end_time >= '2013-05-01 00:00:00+00\n'::timestamp with time zone) AND 
(detailed_report.end_time < '2013-07-01\n00:00:00+00'::timestamp with time zone))\nBuffers: shared hit=69 read=2505035\n-> BitmapAnd (cost=832774.93..832774.93 rows=979894 width=0) (actual\ntime=13007.643..13007.643 rows=0 loops=1)\nBuffers: shared read=108495\n-> Bitmap Index Scan on client_detailed_report_idx (cost=0.00..172876.66\nrows=7862413 width=0) (actual time=2546.204..2546.204 rows=7840766 loops=1)\nIndex Cond: ((detailed_report.client)::text = 'CHOICE'::text)\nBuffers: shared read=21427\n-> Bitmap Index Scan on group_id_detailed_report_idx (cost=0.00..307105.20\nrows=14971818 width=0) (actual time=4265.728..4265.728 rows=14945965\nloops=1)\nIndex Cond: ((detailed_report.group_id)::text = 'admin'::text)\nBuffers: shared read=40840\n-> Bitmap Index Scan on end_time_detailed_report_idx (cost=0.00..352057.65\nrows=16790108 width=0) (actual time=3489.106..3489.106 rows=16917795\nloops=1)\nIndex Cond: ((detailed_report.end_time >= '2013-05-01\n00:00:00+00'::timestamp with time zone) AND (detailed_report.end_time <\n'2013-07-01 00:00:00+00'::timestamp wi\nth time zone))\nBuffers: shared read=46228\nTotal runtime:* 148558.070 ms*\n(24 rows)\n\n\n\nTest Case 2:\n************\nIndexes :\n1)client\n2)group_id\n3)gateway\n4)end_time\n5)client,group_id\n\n\"det_rep_pkey\" PRIMARY KEY, btree (group_id, call_id, start_time)\n\"client_detailed_report_idx\" btree (client), tablespace \"indexspace\"\n\"clientgroupid_detailed_report_idx\" btree (client, group_id), tablespace\n\"indexspace\"\n\"end_time_detailed_report_idx\" btree (end_time), tablespace \"indexspace\"\n\"gateway_detailed_report_idx\" btree (gateway), tablespace \"indexspace\"\n\"group_id_detailed_report_idx\" btree (group_id), tablespace \"indexspace\"\n\n\ntestdb=# EXPLAIN (analyze,buffers,verbose)SELECT text(client) as client,\ntext(gateway) as gateway,count(*)::bigint as total_calls,\n(avg(duration)/1000.0) ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2)\nas pdd, sum(call_duration_recv)/1000.0 as duration_recv,\nsum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as\ncall_amount_recv, sum(call_amount_pay) as call_amount_pay FROM\ndetailed_report WHERE end_time>='2013-05-01 00:00' and end_time<'2013-07-01\n00:00' and group_id='admin' and client ='CHOICE' GROUP by client, gateway\nORDER BY call_amount_recv DESC;\n\nQUERY PLAN\nSort (cost=3172381.37..3172387.11 rows=2297 width=44) (actual\ntime=132725.901..132725.901 rows=5 loops=1)\nOutput: ((client)::text), ((gateway)::text), (count(*)), (((avg(duration) /\n1000.0))::numeric(10,2)), ((avg(pdd))::numeric(10,2)),\n(((sum(call_duration_recv))::numeric / 1000.0)), (((sum(c\nall_duration_pay))::numeric / 1000.0)), (sum(call_amount_recv)),\n(sum(call_amount_pay)), client, gateway\nSort Key: (sum(detailed_report.call_amount_recv))\nSort Method: quicksort Memory: 25kB\nBuffers: shared read=2472883\n-> HashAggregate (cost=3172178.48..3172253.13 rows=2297 width=44) (actual\ntime=132725.861..132725.881 rows=5 loops=1)\nOutput: (client)::text, (gateway)::text, count(*), ((avg(duration) /\n1000.0))::numeric(10,2), (avg(pdd))::numeric(10,2),\n((sum(call_duration_recv))::numeric / 1000.0), ((sum(call_dur\nation_pay))::numeric / 1000.0), sum(call_amount_recv),\nsum(call_amount_pay), client, gateway\nBuffers: shared read=2472883\n-> Bitmap Heap Scan on public.detailed_report (cost=434121.21..3149462.57\nrows=1009596 width=44) (actual time=8257.581..120311.450 rows=5248227\nloops=1)\nOutput: group_id, client, gateway, call_id, parent_call_id, start_time,\nconnect_time, 
end_time, duration, source, source_alias, dest_in_number,\ndest_out_number, bp_code_pay, bi\nlled_duration_pay, rate_pay, rate_effective_date_pay, type_value_pay,\nslab_time_pay, pin_pay, amount_pay, adjusted_pin_pay, adjusted_amount_pay,\ncall_amount_pay, country_code_pay, country_des\nc_pay, master_country_code, master_country_desc, bp_code_recv,\nbilled_duration_recv, rate_recv, rate_effective_date_recv, type_value_recv,\nslab_time_recv, pin_recv, amount_recv, adjusted_pin_\nrecv, adjusted_amount_recv, call_amount_recv, country_code_recv,\ncountry_desc_recv, subscriber_type, pdd, disconnect_reason, source_ip,\ndest_ip, caller_hop, callee_hop, caller_received_from_h\nop, callee_sent_to_hop, caller_media_ip_port, callee_media_ip_port,\ncaller_original_media_ip_port, callee_original_media_ip_port, switch_ip,\ncall_shop_amount_paid, version, call_duration_pay,\ncall_duration_recv, audio_codec, video_codec, shadow_amount_recv,\nshadow_amount_pay, pulse_applied_recv, pulse_applied_pay\nRecheck Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND\n((detailed_report.group_id)::text = 'admin'::text) AND\n(detailed_report.end_time >= '2013-05-01 00:00:00+00\n'::timestamp with time zone) AND (detailed_report.end_time < '2013-07-01\n00:00:00+00'::timestamp with time zone))\nBuffers: shared read=2472883\n-> BitmapAnd (cost=434121.21..434121.21 rows=1009596 width=0) (actual\ntime=7101.419..7101.419 rows=0 loops=1)\nBuffers: shared read=76274\n-> Bitmap Index Scan on clientgroupid_detailed_report_idx\n(cost=0.00..74766.52 rows=2649396 width=0) (actual time=3066.346..3066.346\nrows=7840766 loops=1)\nIndex Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND\n((detailed_report.group_id)::text = 'admin'::text))\nBuffers: shared read=30046\n-> Bitmap Index Scan on end_time_detailed_report_idx (cost=0.00..358849.64\nrows=17114107 width=0) (actual time=2969.577..2969.577 rows=16917795\nloops=1)\nIndex Cond: ((detailed_report.end_time >= '2013-05-01\n00:00:00+00'::timestamp with time zone) AND (detailed_report.end_time <\n'2013-07-01 00:00:00+00'::timestamp wi\nth time zone))\nBuffers: shared read=46228\nTotal runtime:* 132726.073 ms*\n(21 rows)\n\n\n\nTest Case 3:\n************\nIndexes:\nIndex :\n1)client\n2)group_id\n3)gateway\n4)end_time\n5)client,group_id\n6)client,group_id,end_time\n\n\"det_rep_pkey\" PRIMARY KEY, btree (group_id, call_id, start_time)\n\"client_detailed_report_idx\" btree (client), tablespace \"indexspace\"\n\"clientgroupid_detailed_report_idx\" btree (client, group_id), tablespace\n\"indexspace\"\n\"clientgroupidendtime_detailed_report_idx\" btree (client, group_id,\nend_time), tablespace \"indexspace\"\n\"end_time_detailed_report_idx\" btree (end_time), tablespace \"indexspace\"\n\"gateway_detailed_report_idx\" btree (gateway), tablespace \"indexspace\"\n\"group_id_detailed_report_idx\" btree (group_id), tablespace \"indexspace\"\n\n\ntestdb=# EXPLAIN (analyze, verbose) SELECT text(client) as client,\ntext(gateway) as gateway,count(*)::bigint as total_calls,\n(avg(duration)/1000.0) ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2)\nas pdd, sum(call_duration_recv)/1000.0 as duration_recv,\nsum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as\ncall_amount_recv, sum(call_amount_pay) as call_amount_\npay FROM detailed_report WHERE end_time>='2013-05-01 00:00' and\nend_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE' GROUP\nby client, gateway ORDER BY call_amount_recv DESC;\n\nQUERY PLAN\n\nSort (cost=2725603.99..2725609.46 rows=2188 
width=44) (actual\ntime=137713.264..137713.265 rows=5 loops=1)\nOutput: ((client)::text), ((gateway)::text), (count(*)), (((avg(duration) /\n1000.0))::numeric(10,2)), ((avg(pdd))::numeric(10,2)),\n(((sum(call_duration_recv))::numeric / 1000.0)), (((sum(c\nall_duration_pay))::numeric / 1000.0)), (sum(call_amount_recv)),\n(sum(call_amount_pay)), client, gateway\nSort Key: (sum(detailed_report.call_amount_recv))\nSort Method: quicksort Memory: 25kB\n-> HashAggregate (cost=2725411.50..2725482.61 rows=2188 width=44) (actual\ntime=137713.192..137713.215 rows=5 loops=1)\nOutput: (client)::text, (gateway)::text, count(*), ((avg(duration) /\n1000.0))::numeric(10,2), (avg(pdd))::numeric(10,2),\n((sum(call_duration_recv))::numeric / 1000.0), ((sum(call_dur\nation_pay))::numeric / 1000.0), sum(call_amount_recv),\nsum(call_amount_pay), client, gateway\n-> Bitmap Heap Scan on public.detailed_report (cost=37356.61..2703244.88\nrows=985183 width=44) (actual time=3925.850..124647.660 rows=5248227\nloops=1)\nOutput: group_id, client, gateway, call_id, parent_call_id, start_time,\nconnect_time, end_time, duration, source, source_alias, dest_in_number,\ndest_out_number, bp_code_pay, bi\nlled_duration_pay, rate_pay, rate_effective_date_pay, type_value_pay,\nslab_time_pay, pin_pay, amount_pay, adjusted_pin_pay, adjusted_amount_pay,\ncall_amount_pay, country_code_pay, country_des\nc_pay, master_country_code, master_country_desc, bp_code_recv,\nbilled_duration_recv, rate_recv, rate_effective_date_recv, type_value_recv,\nslab_time_recv, pin_recv, amount_recv, adjusted_pin_\nrecv, adjusted_amount_recv, call_amount_recv, country_code_recv,\ncountry_desc_recv, subscriber_type, pdd, disconnect_reason, source_ip,\ndest_ip, caller_hop, callee_hop, caller_received_from_h\nop, callee_sent_to_hop, caller_media_ip_port, callee_media_ip_port,\ncaller_original_media_ip_port, callee_original_media_ip_port, switch_ip,\ncall_shop_amount_paid, version, call_duration_pay,\ncall_duration_recv, audio_codec, video_codec, shadow_amount_recv,\nshadow_amount_pay, pulse_applied_recv, pulse_applied_pay\nRecheck Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND\n((detailed_report.group_id)::text = 'admin'::text) AND\n(detailed_report.end_time >= '2013-05-01 00:00:00+00\n'::timestamp with time zone) AND (detailed_report.end_time < '2013-07-01\n00:00:00+00'::timestamp with time zone))\n-> Bitmap Index Scan on clientgroupidendtime_detailed_report_idx\n(cost=0.00..37110.31 rows=985183 width=0) (actual time=2820.150..2820.150\nrows=5248227 loops=1)\nIndex Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND\n((detailed_report.group_id)::text = 'admin'::text) AND\n(detailed_report.end_time >= '2013-05-01 00:00:0\n0+00'::timestamp with time zone) AND (detailed_report.end_time <\n'2013-07-01 00:00:00+00'::timestamp with time zone))\nTotal runtime: *137728.029 ms*\n(12 rows)\n\nTried by creating partial Index on group_id column for the value 'admin'\nand also end_time column for one month range.\n\nWith all the above experiment, could not reduce the response time, please\nsuggest.\n\nhi,Registered with PostgreSQL Help Forum to identify and resolve the Postgres DB performance issue, received suggestions but could not improve the speed/response time. 
Please help.\nDetails:Postgres Version 9.3.1Server configuration:\nProcessor: 2 x Intel Quad core E5620 @ 2.40GHz\nRAM: 16 GB\nPostgres configuration:\nEffective cache size = 10 GB\nshared Buffer = 1250 MB\nrandom page cost = 4\nTable size = 60 GB\nNumber of records = 44 million Carried out Vacuum Analyze after inserting new records and also after creating Index,\n6 months data, every month around 10 GB will get added. Expecting good performance with 3 years data.DB Will be used for Reporting/Read, will not be used for transaction. Daily records will be inserted through bulk insertion every day.\nTable schema:                    Table \"public.detailed_report\"            Column             |            Type            | Modifiers -------------------------------+----------------------------+-----------\n group_id                      | character varying(50)      | not null client                        | character varying(50)      |  gateway                       | character varying(50)      | \n call_id                       | character varying(120)     | not null parent_call_id                | character varying(120)     |  start_time                    | timestamp with time zone   | not null\n connect_time                  | timestamp with time zone   |  end_time                      | timestamp with time zone   |  duration                      | integer                    | \n source                        | character varying(50)      |  source_alias                  | character varying(50)      |  dest_in_number                | character varying(50)      |  dest_out_number               | character varying(50)      | \n bp_code_pay                   | character varying[]        |  billed_duration_pay           | integer[]                  |  rate_pay                      | character varying[]        | \n rate_effective_date_pay       | timestamp with time zone[] |  type_value_pay                | character varying[]        |  slab_time_pay                 | character varying[]        |  pin_pay                       | bigint[]                   | \n amount_pay                    | double precision[]         |  adjusted_pin_pay              | bigint[]                   |  adjusted_amount_pay           | double precision[]         | \n call_amount_pay               | double precision           |  country_code_pay              | character varying[]        |  country_desc_pay              | character varying[]        |  master_country_code           | character varying(15)      | \n master_country_desc           | character varying(100)     |  bp_code_recv                  | character varying[]        |  billed_duration_recv          | integer[]                  | \n rate_recv                     | character varying[]        |  rate_effective_date_recv      | timestamp with time zone[] |  type_value_recv               | character varying[]        |  slab_time_recv                | character varying[]        | \n pin_recv                      | bigint[]                   |  amount_recv                   | double precision[]         |  adjusted_pin_recv             | bigint[]                   | \n adjusted_amount_recv          | double precision[]         |  call_amount_recv              | double precision           |  country_code_recv             | character varying[]        | \n country_desc_recv             | character varying[]        |  subscriber_type               | character varying(50)      |  pdd                           | smallint                   |  disconnect_reason        
     | character varying(200)     | \n source_ip                     | character varying(20)      |  dest_ip                       | character varying(20)      |  caller_hop                    | character varying(20)      | \n callee_hop                    | character varying(20)      |  caller_received_from_hop      | character varying(20)      |  callee_sent_to_hop            | character varying(20)      |  caller_media_ip_port          | character varying(25)      | \n callee_media_ip_port          | character varying(25)      |  caller_original_media_ip_port | character varying(25)      |  callee_original_media_ip_port | character varying(25)      | \n switch_ip                     | character varying(20)      |  call_shop_amount_paid         | boolean                    |  version                       | character varying          |  call_duration_pay             | integer                    | \n call_duration_recv            | integer                    |  audio_codec                   | character varying(5)       |  video_codec                   | character varying(5)       | \n shadow_amount_recv            | double precision           |  shadow_amount_pay             | double precision           |  pulse_applied_recv            | character varying(50)      |  pulse_applied_pay             | character varying(50)      | \nIndex, multi column, 3 columns, matches exactly with query where condition\n\"endtime_groupid_client_tsidx_detail_report\" btree (end_time DESC, group_id, client), tablespace \"indexspace\" which exactly matches with 'where' condition,\n\" WHERE end_time>='2013-05-01 00:00' and end_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE' GROUP by client, gateway;\" \nIndex on a separate tablespace on another hard disk.\nQuery:\nEXPLAIN (analyze, buffers) SELECT text(client) as client, text(gateway) as gateway,count(*)::bigint as total_calls, (avg(duration)/1000.0) ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2) as pdd, sum(call_duration_recv)/1000.0 as duration_recv, sum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as call_amount_recv, sum(call_amount_pay) as call_amount_\npay FROM detailed_report WHERE end_time>='2013-05-01 00:00' and end_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE' GROUP by client, gateway ORDER BY call_amount_recv DESC; \nQUERY PLAN \n------------------------------------------------------\nSort (cost=3422863.06..3422868.69 rows=2254 width=44) (actual time=137852.474..137852.474 rows=5 loops=1)\nSort Key: (sum(call_amount_recv))\nSort Method: quicksort Memory: 25kB\nBuffers: shared read=2491664\n-> HashAggregate (cost=3422664.28..3422737.53 rows=2254 width=44) (actual time=137852.402..137852.454 rows=5 loops=1)\nBuffers: shared read=2491664\n-> Bitmap Heap Scan on detailed_report (cost=644828.11..3399506.87 rows=1029218 width=44) (actual time=4499.558..125443.122 rows=5248227 loops=1)\nRecheck Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with time zone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time zone) AND ((group_id)::text = 'adm\nin'::text) AND ((client)::text = 'CHOICE'::text))\nBuffers: shared read=2491664\n-> Bitmap Index Scan on endtime_groupid_client_tsidx_detail_report (cost=0.00..644570.81 rows=1029218 width=0) (actual time=3418.754..3418.754 rows=5248227 loops=1)\nIndex Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with time zone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time zone) AND ((group_id)::text = \n'admin'::text) AND 
((client)::text = 'CHOICE'::text))\nBuffers: shared read=95055\nTotal runtime: 137868.946 ms\n(13 rows)\nChecked by removing ORDER BY, but no improvement.\nBy increasing random_page_cost to 25, the query gets executed sequentially,Seq Scan on detailed_report, time taken is comparatively better than Indexed scan. But I am not preferring because when the data grows the sequential scan performance will come down.\ncarried out 3 more set of tests:\n1. Index on columns2. multiple column index, with 2 columns3. multiple column index, with three columnsTest Case 1:************indexes :1)client2)group_id3)gateway4)end_time\n\"det_rep_pkey\" PRIMARY KEY, btree (group_id, call_id, start_time)\"client_detailed_report_idx\" btree (client), tablespace \"indexspace\"\"end_time_detailed_report_idx\" btree (end_time), tablespace \"indexspace\"\n\"gateway_detailed_report_idx\" btree (gateway), tablespace \"indexspace\"\"group_id_detailed_report_idx\" btree (group_id), tablespace \"indexspace\"testdb=# EXPLAIN (analyze,buffers,verbose)SELECT text(client) as client, text(gateway) as gateway,count(*)::bigint as total_calls, (avg(duration)/1000.0) ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2) as pdd, sum(call_duration_recv)/1000.0 as duration_recv, sum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as call_amount_recv, sum(call_amount_pay) as call_amount_pay FROM detailed_report WHERE end_time>='2013-05-01 00:00' and end_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE' GROUP by client, gateway ORDER BY call_amount_recv DESC; \nQUERY PLAN Sort (cost=3510106.93..3510112.25 rows=2127 width=44) (actual time=148557.599..148557.599 rows=5 loops=1)Output: ((client)::text), ((gateway)::text), (count(*)), (((avg(duration) / 1000.0))::numeric(10,2)), ((avg(pdd))::numeric(10,2)), (((sum(call_duration_recv))::numeric / 1000.0)), (((sum(c\nall_duration_pay))::numeric / 1000.0)), (sum(call_amount_recv)), (sum(call_amount_pay)), client, gatewaySort Key: (sum(detailed_report.call_amount_recv))Sort Method: quicksort Memory: 25kBBuffers: shared hit=69 read=2505035\n-> HashAggregate (cost=3509920.24..3509989.37 rows=2127 width=44) (actual time=148557.556..148557.581 rows=5 loops=1)Output: (client)::text, (gateway)::text, count(*), ((avg(duration) / 1000.0))::numeric(10,2), (avg(pdd))::numeric(10,2), ((sum(call_duration_recv))::numeric / 1000.0), ((sum(call_dur\nation_pay))::numeric / 1000.0), sum(call_amount_recv), sum(call_amount_pay), client, gatewayBuffers: shared hit=69 read=2505035-> Bitmap Heap Scan on public.detailed_report (cost=832774.93..3487872.62 rows=979894 width=44) (actual time=14257.148..135355.676 rows=5248227 loops=1)\nOutput: group_id, client, gateway, call_id, parent_call_id, start_time, connect_time, end_time, duration, source, source_alias, dest_in_number, dest_out_number, bp_code_pay, billed_duration_pay, rate_pay, rate_effective_date_pay, type_value_pay, slab_time_pay, pin_pay, amount_pay, adjusted_pin_pay, adjusted_amount_pay, call_amount_pay, country_code_pay, country_des\nc_pay, master_country_code, master_country_desc, bp_code_recv, billed_duration_recv, rate_recv, rate_effective_date_recv, type_value_recv, slab_time_recv, pin_recv, amount_recv, adjusted_pin_recv, adjusted_amount_recv, call_amount_recv, country_code_recv, country_desc_recv, subscriber_type, pdd, disconnect_reason, source_ip, dest_ip, caller_hop, callee_hop, caller_received_from_h\nop, callee_sent_to_hop, caller_media_ip_port, callee_media_ip_port, caller_original_media_ip_port, 
callee_original_media_ip_port, switch_ip, call_shop_amount_paid, version, call_duration_pay,call_duration_recv, audio_codec, video_codec, shadow_amount_recv, shadow_amount_pay, pulse_applied_recv, pulse_applied_pay\nRecheck Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND ((detailed_report.group_id)::text = 'admin'::text) AND (detailed_report.end_time >= '2013-05-01 00:00:00+00'::timestamp with time zone) AND (detailed_report.end_time < '2013-07-01 00:00:00+00'::timestamp with time zone))\nBuffers: shared hit=69 read=2505035-> BitmapAnd (cost=832774.93..832774.93 rows=979894 width=0) (actual time=13007.643..13007.643 rows=0 loops=1)Buffers: shared read=108495-> Bitmap Index Scan on client_detailed_report_idx (cost=0.00..172876.66 rows=7862413 width=0) (actual time=2546.204..2546.204 rows=7840766 loops=1)\nIndex Cond: ((detailed_report.client)::text = 'CHOICE'::text)Buffers: shared read=21427-> Bitmap Index Scan on group_id_detailed_report_idx (cost=0.00..307105.20 rows=14971818 width=0) (actual time=4265.728..4265.728 rows=14945965 loops=1)\nIndex Cond: ((detailed_report.group_id)::text = 'admin'::text)Buffers: shared read=40840-> Bitmap Index Scan on end_time_detailed_report_idx (cost=0.00..352057.65 rows=16790108 width=0) (actual time=3489.106..3489.106 rows=16917795 loops=1)\nIndex Cond: ((detailed_report.end_time >= '2013-05-01 00:00:00+00'::timestamp with time zone) AND (detailed_report.end_time < '2013-07-01 00:00:00+00'::timestamp with time zone))Buffers: shared read=46228\nTotal runtime: 148558.070 ms(24 rows)Test Case 2:************Indexes :1)client2)group_id3)gateway4)end_time5)client,group_id\"det_rep_pkey\" PRIMARY KEY, btree (group_id, call_id, start_time)\n\"client_detailed_report_idx\" btree (client), tablespace \"indexspace\"\"clientgroupid_detailed_report_idx\" btree (client, group_id), tablespace \"indexspace\"\"end_time_detailed_report_idx\" btree (end_time), tablespace \"indexspace\"\n\"gateway_detailed_report_idx\" btree (gateway), tablespace \"indexspace\"\"group_id_detailed_report_idx\" btree (group_id), tablespace \"indexspace\"testdb=# EXPLAIN (analyze,buffers,verbose)SELECT text(client) as client, text(gateway) as gateway,count(*)::bigint as total_calls, (avg(duration)/1000.0) ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2) as pdd, sum(call_duration_recv)/1000.0 as duration_recv, sum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as call_amount_recv, sum(call_amount_pay) as call_amount_pay FROM detailed_report WHERE end_time>='2013-05-01 00:00' and end_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE' GROUP by client, gateway ORDER BY call_amount_recv DESC;\nQUERY PLAN Sort (cost=3172381.37..3172387.11 rows=2297 width=44) (actual time=132725.901..132725.901 rows=5 loops=1)Output: ((client)::text), ((gateway)::text), (count(*)), (((avg(duration) / 1000.0))::numeric(10,2)), ((avg(pdd))::numeric(10,2)), (((sum(call_duration_recv))::numeric / 1000.0)), (((sum(c\nall_duration_pay))::numeric / 1000.0)), (sum(call_amount_recv)), (sum(call_amount_pay)), client, gatewaySort Key: (sum(detailed_report.call_amount_recv))Sort Method: quicksort Memory: 25kBBuffers: shared read=2472883\n-> HashAggregate (cost=3172178.48..3172253.13 rows=2297 width=44) (actual time=132725.861..132725.881 rows=5 loops=1)Output: (client)::text, (gateway)::text, count(*), ((avg(duration) / 1000.0))::numeric(10,2), (avg(pdd))::numeric(10,2), ((sum(call_duration_recv))::numeric / 1000.0), ((sum(call_dur\nation_pay))::numeric / 1000.0), 
sum(call_amount_recv), sum(call_amount_pay), client, gatewayBuffers: shared read=2472883-> Bitmap Heap Scan on public.detailed_report (cost=434121.21..3149462.57 rows=1009596 width=44) (actual time=8257.581..120311.450 rows=5248227 loops=1)\nOutput: group_id, client, gateway, call_id, parent_call_id, start_time, connect_time, end_time, duration, source, source_alias, dest_in_number, dest_out_number, bp_code_pay, billed_duration_pay, rate_pay, rate_effective_date_pay, type_value_pay, slab_time_pay, pin_pay, amount_pay, adjusted_pin_pay, adjusted_amount_pay, call_amount_pay, country_code_pay, country_des\nc_pay, master_country_code, master_country_desc, bp_code_recv, billed_duration_recv, rate_recv, rate_effective_date_recv, type_value_recv, slab_time_recv, pin_recv, amount_recv, adjusted_pin_recv, adjusted_amount_recv, call_amount_recv, country_code_recv, country_desc_recv, subscriber_type, pdd, disconnect_reason, source_ip, dest_ip, caller_hop, callee_hop, caller_received_from_h\nop, callee_sent_to_hop, caller_media_ip_port, callee_media_ip_port, caller_original_media_ip_port, callee_original_media_ip_port, switch_ip, call_shop_amount_paid, version, call_duration_pay,call_duration_recv, audio_codec, video_codec, shadow_amount_recv, shadow_amount_pay, pulse_applied_recv, pulse_applied_pay\nRecheck Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND ((detailed_report.group_id)::text = 'admin'::text) AND (detailed_report.end_time >= '2013-05-01 00:00:00+00'::timestamp with time zone) AND (detailed_report.end_time < '2013-07-01 00:00:00+00'::timestamp with time zone))\nBuffers: shared read=2472883-> BitmapAnd (cost=434121.21..434121.21 rows=1009596 width=0) (actual time=7101.419..7101.419 rows=0 loops=1)Buffers: shared read=76274-> Bitmap Index Scan on clientgroupid_detailed_report_idx (cost=0.00..74766.52 rows=2649396 width=0) (actual time=3066.346..3066.346 rows=7840766 loops=1)\nIndex Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND ((detailed_report.group_id)::text = 'admin'::text))Buffers: shared read=30046-> Bitmap Index Scan on end_time_detailed_report_idx (cost=0.00..358849.64 rows=17114107 width=0) (actual time=2969.577..2969.577 rows=16917795 loops=1)\nIndex Cond: ((detailed_report.end_time >= '2013-05-01 00:00:00+00'::timestamp with time zone) AND (detailed_report.end_time < '2013-07-01 00:00:00+00'::timestamp with time zone))Buffers: shared read=46228\nTotal runtime: 132726.073 ms(21 rows)Test Case 3:************Indexes:Index :1)client2)group_id3)gateway4)end_time5)client,group_id6)client,group_id,end_time\n\"det_rep_pkey\" PRIMARY KEY, btree (group_id, call_id, start_time)\"client_detailed_report_idx\" btree (client), tablespace \"indexspace\"\"clientgroupid_detailed_report_idx\" btree (client, group_id), tablespace \"indexspace\"\n\"clientgroupidendtime_detailed_report_idx\" btree (client, group_id, end_time), tablespace \"indexspace\"\"end_time_detailed_report_idx\" btree (end_time), tablespace \"indexspace\"\n\"gateway_detailed_report_idx\" btree (gateway), tablespace \"indexspace\"\"group_id_detailed_report_idx\" btree (group_id), tablespace \"indexspace\"testdb=# EXPLAIN (analyze, verbose) SELECT text(client) as client, text(gateway) as gateway,count(*)::bigint as total_calls, (avg(duration)/1000.0) ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2) as pdd, sum(call_duration_recv)/1000.0 as duration_recv, sum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as call_amount_recv, sum(call_amount_pay) as call_amount_\npay FROM detailed_report 
WHERE end_time>='2013-05-01 00:00' and end_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE' GROUP by client, gateway ORDER BY call_amount_recv DESC; \nQUERY PLAN Sort (cost=2725603.99..2725609.46 rows=2188 width=44) (actual time=137713.264..137713.265 rows=5 loops=1)Output: ((client)::text), ((gateway)::text), (count(*)), (((avg(duration) / 1000.0))::numeric(10,2)), ((avg(pdd))::numeric(10,2)), (((sum(call_duration_recv))::numeric / 1000.0)), (((sum(c\nall_duration_pay))::numeric / 1000.0)), (sum(call_amount_recv)), (sum(call_amount_pay)), client, gatewaySort Key: (sum(detailed_report.call_amount_recv))Sort Method: quicksort Memory: 25kB-> HashAggregate (cost=2725411.50..2725482.61 rows=2188 width=44) (actual time=137713.192..137713.215 rows=5 loops=1)\nOutput: (client)::text, (gateway)::text, count(*), ((avg(duration) / 1000.0))::numeric(10,2), (avg(pdd))::numeric(10,2), ((sum(call_duration_recv))::numeric / 1000.0), ((sum(call_duration_pay))::numeric / 1000.0), sum(call_amount_recv), sum(call_amount_pay), client, gateway\n-> Bitmap Heap Scan on public.detailed_report (cost=37356.61..2703244.88 rows=985183 width=44) (actual time=3925.850..124647.660 rows=5248227 loops=1)Output: group_id, client, gateway, call_id, parent_call_id, start_time, connect_time, end_time, duration, source, source_alias, dest_in_number, dest_out_number, bp_code_pay, bi\nlled_duration_pay, rate_pay, rate_effective_date_pay, type_value_pay, slab_time_pay, pin_pay, amount_pay, adjusted_pin_pay, adjusted_amount_pay, call_amount_pay, country_code_pay, country_desc_pay, master_country_code, master_country_desc, bp_code_recv, billed_duration_recv, rate_recv, rate_effective_date_recv, type_value_recv, slab_time_recv, pin_recv, amount_recv, adjusted_pin_\nrecv, adjusted_amount_recv, call_amount_recv, country_code_recv, country_desc_recv, subscriber_type, pdd, disconnect_reason, source_ip, dest_ip, caller_hop, callee_hop, caller_received_from_hop, callee_sent_to_hop, caller_media_ip_port, callee_media_ip_port, caller_original_media_ip_port, callee_original_media_ip_port, switch_ip, call_shop_amount_paid, version, call_duration_pay,\ncall_duration_recv, audio_codec, video_codec, shadow_amount_recv, shadow_amount_pay, pulse_applied_recv, pulse_applied_payRecheck Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND ((detailed_report.group_id)::text = 'admin'::text) AND (detailed_report.end_time >= '2013-05-01 00:00:00+00\n'::timestamp with time zone) AND (detailed_report.end_time < '2013-07-01 00:00:00+00'::timestamp with time zone))-> Bitmap Index Scan on clientgroupidendtime_detailed_report_idx (cost=0.00..37110.31 rows=985183 width=0) (actual time=2820.150..2820.150 rows=5248227 loops=1)\nIndex Cond: (((detailed_report.client)::text = 'CHOICE'::text) AND ((detailed_report.group_id)::text = 'admin'::text) AND (detailed_report.end_time >= '2013-05-01 00:00:00+00'::timestamp with time zone) AND (detailed_report.end_time < '2013-07-01 00:00:00+00'::timestamp with time zone))\nTotal runtime: 137728.029 ms(12 rows)Tried by creating partial Index on group_id column for the value 'admin' and also end_time column for one month range.\nWith all the above experiment, could not reduce the response time, please suggest.", "msg_date": "Fri, 6 Dec 2013 23:06:58 +0530", "msg_from": "chidamparam muthusamy <[email protected]>", "msg_from_op": true, "msg_subject": "postgres performance" }, { "msg_contents": "On Friday, December 06, 2013 11:06:58 PM chidamparam muthusamy wrote:\n> hi,\n> Registered with PostgreSQL Help 
Forum to identify and resolve the Postgres\n> DB performance issue, received suggestions but could not improve the\n> speed/response time. Please help.\n> \n> Details:\n> Postgres Version 9.3.1\n> Server configuration:\n> Processor: 2 x Intel Quad core E5620 @ 2.40GHz\n> RAM: 16 GB\n> \n> Postgres configuration:\n> Effective cache size = 10 GB\n> shared Buffer = 1250 MB\n> random page cost = 4\n> \n> Table size = 60 GB\n> Number of records = 44 million\n> Carried out Vacuum Analyze after inserting new records and also after\n> creating Index,\n> 6 months data, every month around 10 GB will get added. Expecting good\n> performance with 3 years data.\n> DB Will be used for Reporting/Read, will not be used for transaction. Daily\n> records will be inserted through bulk insertion every day.\n\nSuggestions:\n\nPartition by month.\n\nAdd many more disks, in RAID-10.\n or move to SSD.\n\nAdd a lot more RAM.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 06 Dec 2013 10:16:03 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance" }, { "msg_contents": "On 06/12/13 17:36, chidamparam muthusamy wrote:\n\nI rather think Alan is right - you either want a lot more RAM or faster \ndisks. Have a look at your first query...\n\n> Query:\n> EXPLAIN (analyze, buffers) SELECT text(client) as client, text(gateway)\n> as gateway,count(*)::bigint as total_calls, (avg(duration)/1000.0)\n> ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2) as pdd,\n> sum(call_duration_recv)/1000.0 as duration_recv,\n> sum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as\n> call_amount_recv, sum(call_amount_pay) as call_amount_\n> pay FROM detailed_report WHERE end_time>='2013-05-01 00:00' and\n> end_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE'\n> GROUP by client, gateway ORDER BY call_amount_recv DESC;\n\n> QUERY PLAN\n> ------------------------------------------------------\n> Sort (cost=3422863.06..3422868.69 rows=2254 width=44) (actual\n> time=137852.474..137852.474 rows=5 loops=1)\n> Sort Key: (sum(call_amount_recv))\n> Sort Method: quicksort Memory: 25kB\n> Buffers: shared read=2491664\n\n> -> HashAggregate (cost=3422664.28..3422737.53 rows=2254 width=44)\n> (actual time=137852.402..137852.454 rows=5 loops=1)\n> Buffers: shared read=2491664\n\n> -> Bitmap Heap Scan on detailed_report (cost=644828.11..3399506.87\n> rows=1029218 width=44) (actual time=4499.558..125443.122 rows=5248227\n> loops=1)\n> Recheck Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with\n> time zone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time\n> zone) AND ((group_id)::text = 'adm\n> in'::text) AND ((client)::text = 'CHOICE'::text))\n> Buffers: shared read=2491664\n\n> -> Bitmap Index Scan on endtime_groupid_client_tsidx_detail_report\n> (cost=0.00..644570.81 rows=1029218 width=0) (actual\n> time=3418.754..3418.754 rows=5248227 loops=1)\n> Index Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with time\n> zone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time\n> zone) AND ((group_id)::text =\n> 'admin'::text) AND ((client)::text = 'CHOICE'::text))\n> Buffers: shared read=95055\n\n> Total runtime: *137868.946 ms*\n> (13 rows)\n\nThe index is being used, but most of your time is going on the \"Bitmap \nHeap Scan\". 
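(If you want to confirm how much of that time is raw disk wait rather than CPU, track_io_timing can show it per plan node -- a hedged aside, assuming you have superuser rights or can set it in postgresql.conf; the query is just the one quoted above:\n\n  SET track_io_timing = on;\n  EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;   -- same report query as before\n\nWith that on, the Bitmap Heap Scan node should also print an \"I/O Timings: read=...\" line, which tells you directly how much of the runtime is spent waiting on reads.)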
You're processing 5.2 million rows in about 120 seconds - \nthat's about 43 rows per millisecond - not too bad. It's not getting any \ncache hits though, it's having to read all the blocks. Looking at the \nnumber of blocks, that's ~2.5 million at 8KB each or about 20GB. You \njust don't have the RAM to cache that.\n\nIf you have lots of similar reporting queries to run, you might get away \nwith dropping the index and letting them run in parallel. Each \nindividual query would be slow but they should be smart enough to share \neach other's sequential scans - the disks would basically be looping \nthrough you data continuously.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 06 Dec 2013 18:37:13 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance" }, { "msg_contents": "On 6.12.2013 18:36, chidamparam muthusamy wrote:\n> hi,\n> Registered with PostgreSQL Help Forum to identify and resolve the\n> Postgres DB performance issue, received suggestions but could not\n> improve the speed/response time. Please help.\n> \n> Details:\n> Postgres Version 9.3.1\n> Server configuration:\n> Processor: 2 x Intel Quad core E5620 @ 2.40GHz\n> RAM: 16 GB\n> \n> Postgres configuration:\n> Effective cache size = 10 GB\n> shared Buffer = 1250 MB\n> random page cost = 4\n> \n> Table size = 60 GB\n> Number of records = 44 million \n> Carried out Vacuum Analyze after inserting new records and also after\n> creating Index,\n> 6 months data, every month around 10 GB will get added. Expecting good\n> performance with 3 years data.\n\nSo, what is good performance? What times do you need to achieve for the\nqueries you've posted?\n\nIt's difficult to read the explain plans wrapped in the message, so I've\npasted some of them into explain.depesz.com:\n\n http://explain.depesz.com/s/9SH\n http://explain.depesz.com/s/hFp\n\nThe estimates are very accurate, and as Richard Huxton pointed out, the\ndominating part is the bitmap heap scan. I assume this is because or\nreading the data from disk. Can you check iostat/vmstat while the\nqueries are running? Are you CPU or I/O bound? I'd guess the latter.\n\nIn that case, adding more RAM / more powerful I/O is probably the\neasiest way to improve the performance. And a partitioning (but that\ndepends on the queries, as it may improve some and hurt others).\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 07 Dec 2013 01:13:29 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance" }, { "msg_contents": "hi,\nthank you so much for the input.\nCan you please clarify the following points:\n*1. 
Output of BitmapAnd = 303660 rows*\n -> BitmapAnd (cost=539314.51..539314.51 rows=303660 width=0) (actual\ntime=9083.085..9083.085 rows=0 loops=1)\n -> Bitmap Index Scan on groupid_index\n (cost=0.00..164070.62 rows=7998674 width=0) (actual\ntime=2303.788..2303.788 rows=7840766 loops=1)\n Index Cond: ((detailed_report.group_id)::text =\n'CHOICE'::text)\n -> Bitmap Index Scan on client_index\n (cost=0.00..175870.62 rows=7998674 width=0) (actual\ntime=2879.691..2879.691 rows=7840113 loops=1)\n Index Cond: ((detailed_report.client)::text =\n'ChoiceFone'::text)\n -> Bitmap Index Scan on partial_endtime_index\n (cost=0.00..199145.02 rows=9573259 width=0) (actual\ntime=1754.044..1754.044 rows=9476589 loops=1)\n Index Cond: ((detailed_report.end_time >=\n'2013-05-01 00:00:00+00'::timestamp with time zone) AND\n(detailed_report.end_time < '2013-06-01 00:00:00+00'::timestamp wi\nth time zone))\n\n*2. In the Next outer node Bitmap Heap Scan, estimated rows = 303660 and\nactual rows = 2958392, why huge difference ? How to bring it down. *\nBitmap Heap Scan on public.detailed_report (cost=539314.51..1544589.52\nrows=303660 width=44) (actual time=9619.913..51757.911 rows=2958392 loops=1)\n\n*3. what is the cause for Recheck, is it possible to reduce the time taken\nfor Recheck ?*\nRecheck Cond: (((detailed_report.group_id)::text = 'CHOICE'::text) AND\n((detailed_report.client)::text = 'ChoiceFone'::text) AND\n(detailed_report.end_time >= '2013-05-01 00:00:\n00+00'::timestamp with time zone) AND (detailed_report.end_time <\n'2013-06-01 00:00:00+00'::timestamp with time zone))\n\nthanks\n\n\nOn Sat, Dec 7, 2013 at 12:07 AM, Richard Huxton <[email protected]> wrote:\n\n> On 06/12/13 17:36, chidamparam muthusamy wrote:\n>\n> I rather think Alan is right - you either want a lot more RAM or faster\n> disks. 
Have a look at your first query...\n>\n>\n> Query:\n>> EXPLAIN (analyze, buffers) SELECT text(client) as client, text(gateway)\n>> as gateway,count(*)::bigint as total_calls, (avg(duration)/1000.0)\n>> ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2) as pdd,\n>> sum(call_duration_recv)/1000.0 as duration_recv,\n>> sum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as\n>> call_amount_recv, sum(call_amount_pay) as call_amount_\n>> pay FROM detailed_report WHERE end_time>='2013-05-01 00:00' and\n>> end_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE'\n>> GROUP by client, gateway ORDER BY call_amount_recv DESC;\n>>\n>\n> QUERY PLAN\n>> ------------------------------------------------------\n>> Sort (cost=3422863.06..3422868.69 rows=2254 width=44) (actual\n>> time=137852.474..137852.474 rows=5 loops=1)\n>> Sort Key: (sum(call_amount_recv))\n>> Sort Method: quicksort Memory: 25kB\n>> Buffers: shared read=2491664\n>>\n>\n> -> HashAggregate (cost=3422664.28..3422737.53 rows=2254 width=44)\n>> (actual time=137852.402..137852.454 rows=5 loops=1)\n>> Buffers: shared read=2491664\n>>\n>\n> -> Bitmap Heap Scan on detailed_report (cost=644828.11..3399506.87\n>> rows=1029218 width=44) (actual time=4499.558..125443.122 rows=5248227\n>> loops=1)\n>> Recheck Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with\n>> time zone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time\n>> zone) AND ((group_id)::text = 'adm\n>> in'::text) AND ((client)::text = 'CHOICE'::text))\n>> Buffers: shared read=2491664\n>>\n>\n> -> Bitmap Index Scan on endtime_groupid_client_tsidx_detail_report\n>> (cost=0.00..644570.81 rows=1029218 width=0) (actual\n>> time=3418.754..3418.754 rows=5248227 loops=1)\n>> Index Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with time\n>> zone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time\n>> zone) AND ((group_id)::text =\n>> 'admin'::text) AND ((client)::text = 'CHOICE'::text))\n>> Buffers: shared read=95055\n>>\n>\n> Total runtime: *137868.946 ms*\n>> (13 rows)\n>>\n>\n> The index is being used, but most of your time is going on the \"Bitmap\n> Heap Scan\". You're processing 5.2 million rows in about 120 seconds -\n> that's about 43 rows per millisecond - not too bad. It's not getting any\n> cache hits though, it's having to read all the blocks. Looking at the\n> number of blocks, that's ~2.5 million at 8KB each or about 20GB. You just\n> don't have the RAM to cache that.\n>\n> If you have lots of similar reporting queries to run, you might get away\n> with dropping the index and letting them run in parallel. Each individual\n> query would be slow but they should be smart enough to share each other's\n> sequential scans - the disks would basically be looping through you data\n> continuously.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\nhi,thank you so much for the input.\nCan you please clarify the following points:1. 
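\n\n(On point 2 above, one thing I am planning to test is a higher per-column statistics target before the next ANALYZE, in case the estimate improves -- only a sketch, the target value is a guess and the column names are the ones from the plans:\n\n  ALTER TABLE detailed_report ALTER COLUMN client SET STATISTICS 500;\n  ALTER TABLE detailed_report ALTER COLUMN group_id SET STATISTICS 500;\n  ANALYZE detailed_report;\n\nNot sure it will help much if the mis-estimate comes from correlation between client, group_id and end_time rather than from any single column.)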
", "msg_date": "Sat, 7 Dec 2013 16:00:24 +0530", "msg_from": "chidamparam muthusamy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres performance" }, { "msg_contents": "2013/12/7 chidamparam muthusamy <[email protected]>\n\n> hi,\n> thank you so much for the input.\n> Can you please clarify the following points:\n> *1. 
Output of BitmapAnd = 303660 rows*\n> -> BitmapAnd (cost=539314.51..539314.51 rows=303660 width=0) (actual\n> time=9083.085..9083.085 rows=0 loops=1)\n> -> Bitmap Index Scan on groupid_index\n> (cost=0.00..164070.62 rows=7998674 width=0) (actual\n> time=2303.788..2303.788 rows=7840766 loops=1)\n> Index Cond: ((detailed_report.group_id)::text =\n> 'CHOICE'::text)\n> -> Bitmap Index Scan on client_index\n> (cost=0.00..175870.62 rows=7998674 width=0) (actual\n> time=2879.691..2879.691 rows=7840113 loops=1)\n> Index Cond: ((detailed_report.client)::text =\n> 'ChoiceFone'::text)\n> -> Bitmap Index Scan on partial_endtime_index\n> (cost=0.00..199145.02 rows=9573259 width=0) (actual\n> time=1754.044..1754.044 rows=9476589 loops=1)\n> Index Cond: ((detailed_report.end_time >=\n> '2013-05-01 00:00:00+00'::timestamp with time zone) AND\n> (detailed_report.end_time < '2013-06-01 00:00:00+00'::timestamp wi\n> th time zone))\n>\n> *2. In the Next outer node Bitmap Heap Scan, estimated rows = 303660 and\n> actual rows = 2958392, why huge difference ? How to bring it down. *\n> Bitmap Heap Scan on public.detailed_report (cost=539314.51..1544589.52\n> rows=303660 width=44) (actual time=9619.913..51757.911 rows=2958392 loops=1)\n>\n> *3. what is the cause for Recheck, is it possible to reduce the time taken\n> for Recheck ?*\n> Recheck Cond: (((detailed_report.group_id)::text = 'CHOICE'::text) AND\n> ((detailed_report.client)::text = 'ChoiceFone'::text) AND\n> (detailed_report.end_time >= '2013-05-01 00:00:\n> 00+00'::timestamp with time zone) AND (detailed_report.end_time <\n> '2013-06-01 00:00:00+00'::timestamp with time zone))\n>\n> thanks\n>\n>\n> On Sat, Dec 7, 2013 at 12:07 AM, Richard Huxton <[email protected]> wrote:\n>\n>> On 06/12/13 17:36, chidamparam muthusamy wrote:\n>>\n>> I rather think Alan is right - you either want a lot more RAM or faster\n>> disks. 
Have a look at your first query...\n>>\n>>\n>> Query:\n>>> EXPLAIN (analyze, buffers) SELECT text(client) as client, text(gateway)\n>>> as gateway,count(*)::bigint as total_calls, (avg(duration)/1000.0)\n>>> ::numeric(10,2) as acd, (avg(pdd)) ::numeric(10,2) as pdd,\n>>> sum(call_duration_recv)/1000.0 as duration_recv,\n>>> sum(call_duration_pay)/1000.0 as duration_pay, sum(call_amount_recv) as\n>>> call_amount_recv, sum(call_amount_pay) as call_amount_\n>>> pay FROM detailed_report WHERE end_time>='2013-05-01 00:00' and\n>>> end_time<'2013-07-01 00:00' and group_id='admin' and client ='CHOICE'\n>>> GROUP by client, gateway ORDER BY call_amount_recv DESC;\n>>>\n>>\n>> QUERY PLAN\n>>> ------------------------------------------------------\n>>> Sort (cost=3422863.06..3422868.69 rows=2254 width=44) (actual\n>>> time=137852.474..137852.474 rows=5 loops=1)\n>>> Sort Key: (sum(call_amount_recv))\n>>> Sort Method: quicksort Memory: 25kB\n>>> Buffers: shared read=2491664\n>>>\n>>\n>> -> HashAggregate (cost=3422664.28..3422737.53 rows=2254 width=44)\n>>> (actual time=137852.402..137852.454 rows=5 loops=1)\n>>> Buffers: shared read=2491664\n>>>\n>>\n>> -> Bitmap Heap Scan on detailed_report (cost=644828.11..3399506.87\n>>> rows=1029218 width=44) (actual time=4499.558..125443.122 rows=5248227\n>>> loops=1)\n>>> Recheck Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with\n>>> time zone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time\n>>> zone) AND ((group_id)::text = 'adm\n>>> in'::text) AND ((client)::text = 'CHOICE'::text))\n>>> Buffers: shared read=2491664\n>>>\n>>\n>> -> Bitmap Index Scan on endtime_groupid_client_tsidx_detail_report\n>>> (cost=0.00..644570.81 rows=1029218 width=0) (actual\n>>> time=3418.754..3418.754 rows=5248227 loops=1)\n>>> Index Cond: ((end_time >= '2013-05-01 00:00:00+00'::timestamp with time\n>>> zone) AND (end_time < '2013-07-01 00:00:00+00'::timestamp with time\n>>> zone) AND ((group_id)::text =\n>>> 'admin'::text) AND ((client)::text = 'CHOICE'::text))\n>>> Buffers: shared read=95055\n>>>\n>>\n>> Total runtime: *137868.946 ms*\n>>> (13 rows)\n>>>\n>>\n>> The index is being used, but most of your time is going on the \"Bitmap\n>> Heap Scan\". You're processing 5.2 million rows in about 120 seconds -\n>> that's about 43 rows per millisecond - not too bad. It's not getting any\n>> cache hits though, it's having to read all the blocks. Looking at the\n>> number of blocks, that's ~2.5 million at 8KB each or about 20GB. You just\n>> don't have the RAM to cache that.\n>>\n>> If you have lots of similar reporting queries to run, you might get away\n>> with dropping the index and letting them run in parallel. Each individual\n>> query would be slow but they should be smart enough to share each other's\n>> sequential scans - the disks would basically be looping through you data\n>> continuously.\n>>\n>> --\n>> Richard Huxton\n>> Archonet Ltd\n>>\n>\n>\nHi,\n about point 3, if I remembr correctly, the problem is that the\nmodule that create the bitmap index could choose between not lossy or\nlossy. 
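(In practice the knobs I have in mind -- explained below -- can be tried per session and then the EXPLAIN (ANALYZE, BUFFERS) re-run; only a sketch, the values are purely illustrative and must be checked against your available RAM:\n\n  SET work_mem = '256MB';              -- enough that the bitmap stays exact instead of lossy\n  SET effective_io_concurrency = 4;    -- can help bitmap heap scans on striped storage\n  -- and, purely to compare plans:\n  SET enable_bitmapscan = off;\n\nand compare the timings.)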
The problem is related to the maximum number of tuples per block (256 for an 8kb block): if there is not enough work_mem, the bitmap code switches to lossy storage (which uses only 1 bit per disk page), and your backend then has to recheck the condition on the tuples read from the table.\n\nYou could try to increase work_mem (better) so the bitmap does not switch from exact to lossy, or try to disable bitmap scans (set enable_bitmapscan=off) to see if you gain something.\n\nAbout point 1, it is doing a bitwise AND between the bitmap indexes, so all 3 bitmap indexes are used to apply the predicates of the query.\n\nAbout point 2, it depends on statistics; it is possible you are not analyzing enough rows of the table. In any case the important thing is that your plans are stable and \"good\".\n\nMoreover it would be interesting to know what type of storage and filesystem you are using -- are you monitoring the latency of your storage?\n\nDid you try effective_io_concurrency to speed up the bitmap heap scan?\nsee here<http://www.postgresql.org/docs/9.3/static/runtime-config-resource.html>\n\nBye\n\nMat", "msg_date": "Sat, 7 Dec 2013 13:55:41 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance" } ]
[ { "msg_contents": "I am trying to debug some shared memory issues with Postgres 9.3.1 and\nCentOS release 6.3 (Final). I have a database machine that probably has\nsome misconfigured shared memory settings. It's getting into 2+ GB of\nswap. Restarting postgres frees all of the memory, but after a few hours\nof normal usage it will go back into swap. During light usage, postgres\nwill *very* slowly release some memory, but not all. Using top, I can see\nthat many of the postgres connections are using shared memory:\n\n```\ntop - 09:38:16 up 1 day, 21:21, 3 users, load average: 0.40, 0.54, 0.45\nTasks: 253 total, 2 running, 251 sleeping, 0 stopped, 0 zombie\nCpu(s): 0.7%us, 0.2%sy, 0.0%ni, 97.8%id, 1.2%wa, 0.0%hi, 0.0%si,\n 0.0%st\nMem: 6998260k total, 6849048k used, 149212k free, 248k buffers\nSwap: 440478516k total, 1981912k used, 438496604k free, 1541356k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 3534 postgres 20 0 2330m 1.4g 1.1g S 0.0 20.4 1:06.99 postgres:\ndeploy mtalcott 10.222.154.172(53495) idle\n 9143 postgres 20 0 2221m 1.1g 983m S 0.0 16.9 0:14.75 postgres:\ndeploy mtalcott 10.222.154.167(35811) idle\n 6026 postgres 20 0 2341m 1.1g 864m S 0.0 16.4 0:46.56 postgres:\ndeploy mtalcott 10.222.154.167(37110) idle\n18538 postgres 20 0 2327m 1.1g 865m S 0.0 16.1 2:06.59 postgres:\ndeploy mtalcott 10.222.154.172(47796) idle\n 1575 postgres 20 0 2358m 1.1g 858m S 0.0 15.9 1:41.76 postgres:\ndeploy mtalcott 10.222.154.172(52560) idle\n```\n\nThere are about 29 total idle connections. `sudo ipcs -m` only shows:\n```\n ------ Shared Memory Segments --------\n key shmid owner perms bytes nattch status\n 0x0052e2c1 163840 postgres 600 48 21\n```\n\nSurprisingly, it only shows it using 48 bytes. Any ideas why that would be?\n\nMy shared memory settings are:\nkernel.shmmax = 8589934592 # 8 GB\nkernel.shmall = 2097152 # * 4096 = 8 GB\nkernel.shmmni = 4096\n\nDo I need to set lower shared memory limits? In the past, I've run into\nissues using pg_dump and executing larger transactions with lower values.\n If I can monitor the shared memory segment I can better understand when\npostgres is allocating and releasing..\n\nI am trying to debug some shared memory issues with Postgres 9.3.1 and CentOS release 6.3 (Final).  I have a database machine that probably has some misconfigured shared memory settings.  It's getting into 2+ GB of swap.  Restarting postgres frees all of the memory, but after a few hours of normal usage it will go back into swap.  During light usage, postgres will *very* slowly release some memory, but not all.  
Using top, I can see that many of the postgres connections are using shared memory:\n```top - 09:38:16 up 1 day, 21:21,  3 users,  load average: 0.40, 0.54, 0.45Tasks: 253 total,   2 running, 251 sleeping,   0 stopped,   0 zombieCpu(s):  0.7%us,  0.2%sy,  0.0%ni, 97.8%id,  1.2%wa,  0.0%hi,  0.0%si,  0.0%st\n\nMem:   6998260k total,  6849048k used,   149212k free,      248k buffersSwap: 440478516k total,  1981912k used, 438496604k free,  1541356k cached  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND\n\n 3534 postgres  20   0 2330m 1.4g 1.1g S  0.0 20.4   1:06.99 postgres: deploy mtalcott 10.222.154.172(53495) idle 9143 postgres  20   0 2221m 1.1g 983m S  0.0 16.9   0:14.75 postgres: deploy mtalcott 10.222.154.167(35811) idle\n\n 6026 postgres  20   0 2341m 1.1g 864m S  0.0 16.4   0:46.56 postgres: deploy mtalcott 10.222.154.167(37110) idle\n\n18538 postgres  20   0 2327m 1.1g 865m S  0.0 16.1   2:06.59 postgres: deploy mtalcott 10.222.154.172(47796) idle\n\n 1575 postgres  20   0 2358m 1.1g 858m S  0.0 15.9   1:41.76 postgres: deploy mtalcott 10.222.154.172(52560) idle \n\n```There are about 29 total idle connections.  `sudo ipcs -m` only shows:```  ------ Shared Memory Segments --------\n  key        shmid      owner      perms      bytes      nattch     status  0x0052e2c1 163840     postgres   600        48         21\n```Surprisingly, it only shows it using 48 bytes.  Any ideas why that would be?\n\nMy shared memory settings are:kernel.shmmax = 8589934592  # 8 GBkernel.shmall = 2097152     # * 4096 = 8 GB  kernel.shmmni = 4096Do I need to set lower shared memory limits?  In the past, I've run into issues using pg_dump and executing larger transactions with lower values.  If I can monitor the shared memory segment I can better understand when postgres is allocating and releasing..", "msg_date": "Fri, 6 Dec 2013 19:21:20 -0800", "msg_from": "Mack Talcott <[email protected]>", "msg_from_op": true, "msg_subject": "Debugging shared memory issues on CentOS" }, { "msg_contents": "Mack Talcott <[email protected]> writes:\n> I am trying to debug some shared memory issues with Postgres 9.3.1 and\n> CentOS release 6.3 (Final). I have a database machine that probably has\n> some misconfigured shared memory settings. It's getting into 2+ GB of\n> swap. Restarting postgres frees all of the memory, but after a few hours\n> of normal usage it will go back into swap.\n\nAre you sure the kernel isn't just swapping out some idle processes\nbecause it feels like it? These numbers don't exactly look like a\nmachine under stress:\n\n> top - 09:38:16 up 1 day, 21:21, 3 users, load average: 0.40, 0.54, 0.45\n> Tasks: 253 total, 2 running, 251 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 0.7%us, 0.2%sy, 0.0%ni, 97.8%id, 1.2%wa, 0.0%hi, 0.0%si,\n> 0.0%st\n> Mem: 6998260k total, 6849048k used, 149212k free, 248k buffers\n> Swap: 440478516k total, 1981912k used, 438496604k free, 1541356k cached\n\nIn particular, you've got 1.5 gig of filesystem cache, so you're hardly\nout of memory. 
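(A quick way to account for the rest is free -m plus a per-process view sorted by resident size -- standard procps tools, nothing Postgres-specific:\n\n  free -m\n  ps -eo pid,rss,comm --sort=-rss | head -20\n\nwhich should show whether anything besides the postgres backends is holding real memory.)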
I don't know where the other 5.5 gig of RAM went, but\nit doesn't look like postgres is eating it; what else is running on\nthis box?\n\nThese lines look absolutely normal, assuming that you've configured\nshared_buffers somewhere in the neighborhood of 1GB:\n\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 3534 postgres 20 0 2330m 1.4g 1.1g S 0.0 20.4 1:06.99 postgres:\n> deploy mtalcott 10.222.154.172(53495) idle\n> 9143 postgres 20 0 2221m 1.1g 983m S 0.0 16.9 0:14.75 postgres:\n> deploy mtalcott 10.222.154.167(35811) idle\n> 6026 postgres 20 0 2341m 1.1g 864m S 0.0 16.4 0:46.56 postgres:\n> deploy mtalcott 10.222.154.167(37110) idle\n> 18538 postgres 20 0 2327m 1.1g 865m S 0.0 16.1 2:06.59 postgres:\n> deploy mtalcott 10.222.154.172(47796) idle\n> 1575 postgres 20 0 2358m 1.1g 858m S 0.0 15.9 1:41.76 postgres:\n> deploy mtalcott 10.222.154.172(52560) idle\n\nThe key thing to realize about that is that the SHR column is *shared*\nmemory, ie all these processes are referencing the same chunk of about 1GB\nworth of memory. The process-specific memory is RES minus SHR, and none\nof those processes seem tremendously out of line on that measure. (Note:\nthe fact that the SHR values aren't all exactly the same is because top\ndoesn't count a shared page until the process has physically touched that\npage. Even the guy with 1.1g of SHR might not have touched all of the\nshared storage yet.)\n\nI'm not sure you have a problem here. If you do, these figures aren't\nshowing it. Having some stuff shoved out to swap is not a problem unless\nyou have a problem with the swap I/O rate. You might try watching \"vmstat\n1\" for awhile to see if the si/so columns show significant activity.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Dec 2013 23:54:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Debugging shared memory issues on CentOS" }, { "msg_contents": "On Tue, Dec 10, 2013 at 8:54 PM, Tom Lane <[email protected]> wrote:\n> Mack Talcott <[email protected]> writes:\n>> I am trying to debug some shared memory issues with Postgres 9.3.1 and\n>> CentOS release 6.3 (Final). I have a database machine that probably has\n>> some misconfigured shared memory settings. It's getting into 2+ GB of\n>> swap. Restarting postgres frees all of the memory, but after a few hours\n>> of normal usage it will go back into swap.\n>\n> Are you sure the kernel isn't just swapping out some idle processes\n> because it feels like it? These numbers don't exactly look like a\n> machine under stress:\n>\n>> top - 09:38:16 up 1 day, 21:21, 3 users, load average: 0.40, 0.54, 0.45\n>> Tasks: 253 total, 2 running, 251 sleeping, 0 stopped, 0 zombie\n>> Cpu(s): 0.7%us, 0.2%sy, 0.0%ni, 97.8%id, 1.2%wa, 0.0%hi, 0.0%si,\n>> 0.0%st\n>> Mem: 6998260k total, 6849048k used, 149212k free, 248k buffers\n>> Swap: 440478516k total, 1981912k used, 438496604k free, 1541356k cached\n>\n> In particular, you've got 1.5 gig of filesystem cache, so you're hardly\n> out of memory. 
I don't know where the other 5.5 gig of RAM went, but\n> it doesn't look like postgres is eating it; what else is running on\n> this box?\n>\n> These lines look absolutely normal, assuming that you've configured\n> shared_buffers somewhere in the neighborhood of 1GB:\n>\n>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>> 3534 postgres 20 0 2330m 1.4g 1.1g S 0.0 20.4 1:06.99 postgres:\n>> deploy mtalcott 10.222.154.172(53495) idle\n>> 9143 postgres 20 0 2221m 1.1g 983m S 0.0 16.9 0:14.75 postgres:\n>> deploy mtalcott 10.222.154.167(35811) idle\n>> 6026 postgres 20 0 2341m 1.1g 864m S 0.0 16.4 0:46.56 postgres:\n>> deploy mtalcott 10.222.154.167(37110) idle\n>> 18538 postgres 20 0 2327m 1.1g 865m S 0.0 16.1 2:06.59 postgres:\n>> deploy mtalcott 10.222.154.172(47796) idle\n>> 1575 postgres 20 0 2358m 1.1g 858m S 0.0 15.9 1:41.76 postgres:\n>> deploy mtalcott 10.222.154.172(52560) idle\n>\n> The key thing to realize about that is that the SHR column is *shared*\n> memory, ie all these processes are referencing the same chunk of about 1GB\n> worth of memory. The process-specific memory is RES minus SHR, and none\n> of those processes seem tremendously out of line on that measure. (Note:\n> the fact that the SHR values aren't all exactly the same is because top\n> doesn't count a shared page until the process has physically touched that\n> page. Even the guy with 1.1g of SHR might not have touched all of the\n> shared storage yet.)\n>\n> I'm not sure you have a problem here. If you do, these figures aren't\n> showing it. Having some stuff shoved out to swap is not a problem unless\n> you have a problem with the swap I/O rate. You might try watching \"vmstat\n> 1\" for awhile to see if the si/so columns show significant activity.\n>\n> regards, tom lane\n\nThanks for your reply. I've included the rest of the top output\nbelow. This is a dedicated postgres box, so nothing else is running.\n\nshared_buffers is set to 1.8g, to accommodate some of our larger\noperations. It looks like this could be lowered a bit, since the max\nshared usage is only 1.1g.\n\nThe pattern I am seeing is that postgres processes keep growing in\nshared (this makes sense as they access more of the shared memory, as\nyou've pointed out) but also process-specific memory as they run more\nqueries. The largest ones are using around 300mb of process-specific\nmemory, even when they're idle and outside of any transactions.\n\nAs for CentOS using 1.5g for disk caching, I'm at a loss. I have\nplayed with the 'swappiness', setting it down to 10 from the default\nof 60 with sysctl. It didn't have any effect.\n\nOnce 70-80% of memory is reached, the machine starts using swap, and\nit keeps growing. At first, queries become slightly slower. Then\nsome larger selects start taking 10, then 20, then 30 seconds. During\nthis, vmstat shows 5-20 procs waiting on both CPU and I/O. All of a\nsudden, generally after some large transaction, about 1g of swap is\nreleased and the number of blocked procs jumps to 50-80. Everything\ngrinds to a halt for a few minutes. Sometimes my app can recover, and\nsometimes it needs a little kick.\n\nAs expected, resetting the connection clears the process-specific\nmemory. The same number of connections on the same machine only use\n20% of memory (with 0 swap) when I periodically reconnect. What kind\nof information are these processes holding on to? 
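(For what it's worth, the way I have been estimating the process-specific part is summing the private mappings from /proc -- a rough sketch, the PID is just one of the idle backends from the top output:\n\n  grep -e VmRSS -e VmSwap /proc/3534/status\n  awk '/^Private/ {sum += $2} END {print sum \" kB private\"}' /proc/3534/smaps\n\nwhich should line up roughly with the RES-minus-SHR numbers from top.)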
I would expect\nlong-running, idle postgres processes to have similar memory usage to\nbrand new, idle ones.\n\nOne thing worth mentioning is that I am heavily using schemas. On\nevery request, I am setting and resetting search_path.\n\nThis top was captured just before swap was released\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 3534 postgres 20 0 2330m 1.4g 1.1g S 0.0 20.4 1:06.99 postgres:\ndeploy mtalcott 10.222.155.179(53495) idle\n 9143 postgres 20 0 2221m 1.1g 983m S 0.0 16.9 0:14.75 postgres:\ndeploy mtalcott 10.222.155.164(35811) idle\n 6026 postgres 20 0 2341m 1.1g 864m S 0.0 16.4 0:46.56 postgres:\ndeploy mtalcott 10.222.155.164(37110) idle\n18538 postgres 20 0 2327m 1.1g 865m S 0.0 16.1 2:06.59 postgres:\ndeploy mtalcott 10.222.155.179(47796) idle\n 1575 postgres 20 0 2358m 1.1g 858m S 0.0 15.9 1:41.76 postgres:\ndeploy mtalcott 10.222.155.179(52560) idle\n17931 postgres 20 0 2343m 1.1g 834m S 0.0 15.8 2:04.61 postgres:\ndeploy mtalcott 10.222.155.164(54439) idle\n18286 postgres 20 0 2363m 1.0g 797m S 1.3 15.6 1:54.97 postgres:\ndeploy mtalcott 10.222.155.179(47588) idle\n 4541 postgres 20 0 2343m 1.0g 783m S 0.0 15.2 1:20.75 postgres:\ndeploy mtalcott 10.222.155.179(53938) idle\n18763 postgres 20 0 2347m 1.0g 772m S 0.0 14.9 1:49.83 postgres:\ndeploy mtalcott 10.222.155.164(32853) idle\n 1088 postgres 20 0 2336m 1.0g 778m S 0.3 14.9 1:35.40 postgres:\ndeploy mtalcott 10.222.155.179(52312) idle\n17933 postgres 20 0 2343m 996m 800m S 0.0 14.6 2:11.68 postgres:\ndeploy mtalcott 10.222.155.164(54443) idle\n 1089 postgres 20 0 2310m 970m 776m S 1.7 14.2 1:18.34 postgres:\ndeploy mtalcott 10.222.155.164(46130) idle\n 3535 postgres 20 0 2354m 950m 779m S 0.0 13.9 1:18.44 postgres:\ndeploy mtalcott 10.222.155.164(33599) idle\n 1708 postgres 20 0 2308m 940m 760m S 0.0 13.8 1:08.72 postgres:\ndeploy mtalcott 10.222.155.164(49552) idle\n18540 postgres 20 0 2337m 932m 784m S 0.7 13.6 1:50.66 postgres:\ndeploy mtalcott 10.222.155.164(59856) idle\n 8471 postgres 20 0 2312m 683m 429m S 0.0 10.0 0:54.35 postgres:\ndeploy mtalcott 10.222.155.179(57867) idle\n 5931 postgres 20 0 2327m 589m 340m S 0.0 8.6 0:40.07 postgres:\ndeploy mtalcott 10.222.155.179(55092) idle\n 6070 postgres 20 0 2306m 568m 358m S 0.0 8.3 0:42.56 postgres:\ndeploy mtalcott 10.222.155.179(55307) idle\n 9135 postgres 20 0 2235m 523m 341m S 0.0 7.7 0:19.65 postgres:\ndeploy mtalcott 10.222.155.164(35140) idle\n10996 postgres 20 0 2103m 229m 169m S 0.0 3.4 0:01.65 postgres:\ndeploy mtalcott 10.222.155.179(60798) idle\n11001 postgres 20 0 2062m 163m 144m S 0.7 2.4 0:01.90 postgres:\ndeploy mtalcott 10.222.155.164(44039) idle\n17697 postgres 20 0 2038m 151m 150m S 0.0 2.2 0:09.82 postgres:\ncheckpointer process\n10869 postgres 20 0 2045m 82m 76m S 3.3 1.2 0:12.19 postgres:\ndeploy mtalcott 10.197.52.158(43556) idle in transaction\n10994 postgres 20 0 2052m 61m 50m S 0.0 0.9 0:00.77 postgres:\ndeploy mtalcott 10.222.155.179(60757) idle\n17680 postgres 20 0 2037m 37m 37m S 0.0 0.6 0:03.34\n/usr/local/pgsql9.3/bin/postgres -D /db/pgsql/9.3/data\n17698 postgres 20 0 2038m 36m 35m S 0.0 0.5 0:02.85 postgres:\nwriter process\n10993 postgres 20 0 2045m 29m 22m S 0.0 0.4 0:00.26 postgres:\ndeploy mtalcott 10.222.155.164(42908) idle\n17701 postgres 20 0 134m 21m 272 S 0.0 0.3 1:21.61 postgres:\nstats collector process\n 4905 postgres 20 0 2045m 13m 8408 S 0.0 0.2 0:00.44 postgres:\ndeploy mtalcott 10.222.155.164(47193) idle\n 5041 postgres 20 0 2044m 13m 8124 S 0.0 0.2 0:00.54 postgres:\ndeploy mtalcott 
10.222.155.164(49813) idle\n 5036 postgres 20 0 2044m 12m 7808 S 0.0 0.2 0:00.50 postgres:\ndeploy mtalcott 10.222.155.164(49380) idle\n 6452 postgres 20 0 2044m 10m 6112 S 0.0 0.2 0:00.26 postgres:\ndeploy mtalcott 10.222.155.164(44313) idle\n 5023 postgres 20 0 2044m 10m 5868 S 0.0 0.2 0:00.50 postgres:\ndeploy mtalcott 10.222.155.164(47882) idle\n 5029 postgres 20 0 2045m 10m 6732 S 0.0 0.1 0:00.81 postgres:\ndeploy mtalcott 10.222.155.164(48498) idle\n 5808 postgres 20 0 2044m 9408 7040 S 0.0 0.1 0:00.30 postgres:\ndeploy mtalcott 10.222.155.164(33987) idle\n17700 postgres 20 0 2039m 4728 4432 S 0.0 0.1 0:00.71 postgres:\nautovacuum launcher process\n10567 deploy 20 0 97820 1372 432 S 0.0 0.0 0:00.02 sshd: deploy@pts/2\n10564 root 20 0 97820 1192 284 S 0.0 0.0 0:00.04 sshd: deploy [priv]\n10998 deploy 20 0 15168 1044 604 R 0.7 0.0 0:00.59 top -c\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Dec 2013 00:23:29 -0800", "msg_from": "Mack Talcott <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Debugging shared memory issues on CentOS" }, { "msg_contents": "Mack Talcott <[email protected]> writes:\n> The pattern I am seeing is that postgres processes keep growing in\n> shared (this makes sense as they access more of the shared memory, as\n> you've pointed out) but also process-specific memory as they run more\n> queries. The largest ones are using around 300mb of process-specific\n> memory, even when they're idle and outside of any transactions.\n\nThere's quite a lot of stuff that a PG process will cache in local memory\nonce it's acquired the info, for example:\n- relcache (relation descriptors)\n- catcache (system catalog entries)\n- compiled trees for plpgsql functions\n\n300mb worth of that stuff seems on the high side, but perhaps you have\nlots and lots of tables, or lots and lots of functions?\n\nIf this is the explanation then flushing that info just results in\nrestarting from a cold-cache situation, which doesn't seem likely to\nbe a win. You're just going to be paying to read it in again.\n\n> As for CentOS using 1.5g for disk caching, I'm at a loss. I have\n> played with the 'swappiness', setting it down to 10 from the default\n> of 60 with sysctl. It didn't have any effect.\n\nSwappiness has nothing to do with disk cache. Disk cache just means that\nthe kernel is free to use any spare memory for copies of file pages it's\nread from disk lately. This is almost always a good thing, because it\nsaves reading those pages again if they're needed again. And the key word\nthere is \"spare\" --- the kernel is at liberty to drop those cached pages\nif it needs the memory for something more pressing. So there's really no\ndownside. Trying to reduce that number is completely counterproductive.\nRather, my observation was that if you had a gig and a half worth of RAM\nthat the kernel felt it could afford to use for disk caching, then you\nweren't having much of a memory problem. However, apparently that\nsnapshot wasn't representative of your problem case:\n\n> Once 70-80% of memory is reached, the machine starts using swap, and\n> it keeps growing. At first, queries become slightly slower. Then\n> some larger selects start taking 10, then 20, then 30 seconds. 
During\n> this, vmstat shows 5-20 procs waiting on both CPU and I/O.\n\nI wonder if the short answer for this isn't that you should be using fewer\nbackends by running a connection pooler. If the backends want to cache a\ncouple hundred meg worth of stuff, it's probably wise to let them do so.\nOr maybe you should just buy some more RAM. 8GB is pretty puny for a\nserver these days (heck, the obsolete laptop I'm typing this mail on\nhas half that much).\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Dec 2013 22:39:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Debugging shared memory issues on CentOS" }, { "msg_contents": "> There's quite a lot of stuff that a PG process will cache in local memory\n> once it's acquired the info, for example:\n> - relcache (relation descriptors)\n> - catcache (system catalog entries)\n> - compiled trees for plpgsql functions\n>\n> 300mb worth of that stuff seems on the high side, but perhaps you have\n> lots and lots of tables, or lots and lots of functions?\n>\n> If this is the explanation then flushing that info just results in\n> restarting from a cold-cache situation, which doesn't seem likely to\n> be a win. You're just going to be paying to read it in again.\n\nIt does seem a bit on the high side, but that makes sense. There are\nabout 90 tables and 5 functions in each schema (all are identical),\nbut there are several infrequent queries for overall statistics that\ndo a union over all schemas (using UNION ALL). That seems like the\nmost likely culprit, as there are ~500 of these schemas.\n\nHowever, as the app serves a variety of customers, each request makes\nqueries in a different schema. Seems like eventually these caches\nwould get pretty large even without the all-schema queries.\n\n> Swappiness has nothing to do with disk cache. Disk cache just means that\n> the kernel is free to use any spare memory for copies of file pages it's\n> read from disk lately. This is almost always a good thing, because it\n> saves reading those pages again if they're needed again. And the key word\n> there is \"spare\" --- the kernel is at liberty to drop those cached pages\n> if it needs the memory for something more pressing. So there's really no\n> downside. Trying to reduce that number is completely counterproductive.\n> Rather, my observation was that if you had a gig and a half worth of RAM\n> that the kernel felt it could afford to use for disk caching, then you\n> weren't having much of a memory problem. However, apparently that\n> snapshot wasn't representative of your problem case:\n\nI see. So, maybe the kernel is _first_ determining that some of the\ninactive processes' memory should be swapped out. Then, since there\nis free memory, it's being used for disk cache?\n\n> I wonder if the short answer for this isn't that you should be using fewer\n> backends by running a connection pooler.\n\nIf I can figure out the maximum number of connections that my server\ncan handle, that's definitely a possibility.\n\n> If the backends want to cache a\n> couple hundred meg worth of stuff, it's probably wise to let them do so.\n> Or maybe you should just buy some more RAM. 8GB is pretty puny for a\n> server these days (heck, the obsolete laptop I'm typing this mail on\n> has half that much).\n\nMore memory is definitely a good solution. 
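(If I do go the pooler route, I am picturing a minimal pgbouncer setup along these lines -- just a sketch, all names and sizes are placeholders I have not tested yet:\n\n  [databases]\n  mtalcott = host=127.0.0.1 port=5432 dbname=mtalcott\n\n  [pgbouncer]\n  listen_addr = 127.0.0.1\n  listen_port = 6432\n  auth_type = md5\n  auth_file = /etc/pgbouncer/userlist.txt\n  pool_mode = transaction\n  default_pool_size = 10\n  max_client_conn = 200\n\nso the app keeps its ~30 client connections but only a handful of real backends stay resident.)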
This server is on EC2, and\nI'm working on replacing it with an instance with twice as much.\nHowever, my concern is that if I double the number of app servers to\nhandle higher load, I will run into the same issue.\n\nI assume the memory of each process grows until it has all 90 tables\nfrom all 500 schemas cached in some way. Any ideas for optimizations\nthat would allow less memory usage in this case with many identical\nschemas? I'm guessing using views rather than select statements\nwouldn't help. Any postgres configs concerning caching I should take\na look at? Different approaches to data organization?\n\nThanks, Tom. I really appreciate your feedback!\n\n>\n> regards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Dec 2013 00:25:36 -0800", "msg_from": "Mack Talcott <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Debugging shared memory issues on CentOS" }, { "msg_contents": "On Wed, Dec 11, 2013 at 9:39 PM, Tom Lane <[email protected]> wrote:\n> Mack Talcott <[email protected]> writes:\n>> The pattern I am seeing is that postgres processes keep growing in\n>> shared (this makes sense as they access more of the shared memory, as\n>> you've pointed out) but also process-specific memory as they run more\n>> queries. The largest ones are using around 300mb of process-specific\n>> memory, even when they're idle and outside of any transactions.\n>\n> There's quite a lot of stuff that a PG process will cache in local memory\n> once it's acquired the info, for example:\n> - relcache (relation descriptors)\n> - catcache (system catalog entries)\n> - compiled trees for plpgsql functions\n>\n> 300mb worth of that stuff seems on the high side, but perhaps you have\n> lots and lots of tables, or lots and lots of functions?\n\nThis has got to be the problem. It's known that pathological\nworkloads (lots and lots of tables,views, and functions) abuse the\ncache memory segment. There's no cap to cache memory so over time it\nwill just accumulate entries until there's nothing left to cache. For\nmost applications, this doesn't even show up on the radar. However,\n300mb per postgres backend will burn through that 8gb pretty quickly.\nIt's tempting to say, \"there should be a limit to backend local cache\"\nbut it's not clear if the extra tracking is really worth it all things\nconsidered. There was some discussion about this (see the archives).\n\nWorkarounds:\n*) install connection pooler (as Tom noted), in particular pgbouncer.\n For workloads like this you will want to be spartan on the number of\nphysical connections -- say, 1 * number of cores. For this option to\nwork you need to use transaction mode which in turn limits use of\nsession dependent features (advisory locks, NOTIFY, prepared\nstatements). 
Also if your client stack is java you need to take some\nextra steps.\n*) add memory\n*) force connections to recycle every X period of time\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Dec 2013 08:56:59 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Debugging shared memory issues on CentOS" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> It's tempting to say, \"there should be a limit to backend local cache\"\n> but it's not clear if the extra tracking is really worth it all things\n> considered. There was some discussion about this (see the archives).\n\nYeah --- there actually was a limit on total catcache size once, long ago.\nWe took it out because it was (a) expensive to enforce and (b) either\npointless or counterproductive on most workloads. The catcache is\nprobably the least of the memory hogs anyway, so it might be that limiting\nthe size of relcache or function caches would be more useful. But that\nmemory is likely to discourage most hackers from investigating.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Dec 2013 10:17:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Debugging shared memory issues on CentOS" } ]
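A rough way to check the cache-bloat theory discussed above (many schemas times many tables per backend) is to count relations per schema straight from the system catalogs, and to see how many backends are sitting idle. This is only a sketch against the standard pg_class / pg_namespace / pg_stat_activity views (pg_stat_activity.state needs 9.2 or later); the 500-schema figure comes from the thread itself, nothing here measures memory directly.

-- How many relcache entries a backend could eventually hold if it
-- touches every customer schema: tables and indexes per schema.
SELECT n.nspname AS schema_name,
       count(*)  AS relations
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'i')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
GROUP BY n.nspname
ORDER BY relations DESC
LIMIT 20;

-- How many backends are idle right now (candidates for a pooler such
-- as pgbouncer, or for periodic recycling as suggested above).
SELECT state, count(*) AS backends
FROM pg_stat_activity
GROUP BY state;

If most connections are idle most of the time, a transaction-mode pooler in front of a much smaller number of real backends keeps the per-backend caches from being multiplied by hundreds of connections.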
[ { "msg_contents": "Hello everyone,\n\nI'm looking for a way to specify join order in SQL query. Actually, the\noptimizer has chosen a plan with hash join of 2 tables, but I would like\nto know if there is a way to force it to use hash join, but with replaced\ntables on build phase and probe phase?\n\nThank you,\nMirko Spasic\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Dec 2013 04:45:59 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Hash join" } ]
[ { "msg_contents": "Hi\n\nI'm just trying about PostgreSQL, I create a database \"test\" with a table\n\"t1\":\n\ntest=> \\d t1\n Table \"public.t1\"\n Column | Type | Modifiers\n---------+---------+-----------------------------------------------------\n col_id | integer | not null default nextval('t1_col_id_seq'::regclass)\n col_int | integer |\nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (col_id)\n \"t1_col_int_idx\" btree (col_int)\n\nThere are 10001000 rows in that table, and basically col_int of each row\nare filled with random data within range [0,1024].\n\nOne strange thing I found is that:\n\ntest=> select distinct col_int from t1;\nTime: 1258.627 ms\ntest=> select distinct col_int from t1;\nTime: 1264.667 ms\ntest=> select distinct col_int from t1;\nTime: 1261.805 ms\n\nIf I use \"group by\":\n\ntest=> select distinct col_int from t1 group by col_int;\nTime: 1180.617 ms\ntest=> select distinct col_int from t1 group by col_int;\nTime: 1179.849 ms\ntest=> select distinct col_int from t1 group by col_int;\nTime: 1177.936 ms\n\nSo the performance difference is not very large.\nBut when I do that:\n\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 row)\n\nTime: 7367.476 ms\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 row)\n\nTime: 6946.233 ms\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 row)\n\nTime: 7386.969 ms\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 row)\n\n\nThe speed is straightly worse! But it's doesn't make sense. Since we can\njust make a temporary table or subquery and count for all rows. The\nperformance should be almost the same.\nSo do you have any idea about this? Or maybe PostgreSQL's query planner can\nbe improved for this kinds of query?\n\nBy the way, if I use a subquery to replace the above query, here's what I\ngot:\n\ntest=> select count(*) from (select distinct col_int from t1) as tmp;\n count\n-------\n 1025\n(1 row)\n\nTime: 1267.468 ms\ntest=> select count(*) from (select distinct col_int from t1) as tmp;\n count\n-------\n 1025\n(1 row)\n\nTime: 1257.327 ms\ntest=> select count(*) from (select distinct col_int from t1) as tmp;\n count\n-------\n 1025\n(1 row)\n\nTime: 1258.189 ms\n\n\nOK, this workaround works. But I just think if postgres can improve with\nthis kind of query, it will be better. 
Also I'm not sure about other\nsimilar scenarios that will cause similar problem.\n\nThe following is the output of \"explain analyze ...\":\ntest=> explain analyze select distinct col_int from t1;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=169268.05..169278.30 rows=1025 width=4) (actual\ntime=39034.653..39037.239 rows=1025 loops=1)\n -> Seq Scan on t1 (cost=0.00..144265.04 rows=10001204 width=4) (actual\ntime=0.041..19619.931 rows=10001000 loops=1)\n Total runtime: 39039.136 ms\n(3 rows)\n\nTime: 39103.622 ms\ntest=> explain analyze select distinct col_int from t1 group by col_int;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=169280.86..169291.11 rows=1025 width=4) (actual\ntime=39062.417..39064.882 rows=1025 loops=1)\n -> HashAggregate (cost=169268.05..169278.30 rows=1025 width=4) (actual\ntime=39058.136..39060.303 rows=1025 loops=1)\n -> Seq Scan on t1 (cost=0.00..144265.04 rows=10001204 width=4)\n(actual time=0.024..19439.482 rows=10001000 loops=1)\n Total runtime: 39066.896 ms\n(4 rows)\n\nTime: 39067.198 ms\ntest=> explain analyze select count(distinct col_int) from t1;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=169268.05..169268.06 rows=1 width=4) (actual\ntime=45994.120..45994.123 rows=1 loops=1)\n -> Seq Scan on t1 (cost=0.00..144265.04 rows=10001204 width=4) (actual\ntime=0.025..19599.950 rows=10001000 loops=1)\n Total runtime: 45994.154 ms\n(3 rows)\n\nTime: 45994.419 ms\n\ntest=> explain analyze select count(*) from (select distinct col_int from\nt1) as tmp;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=169291.11..169291.12 rows=1 width=0) (actual\ntime=39050.598..39050.600 rows=1 loops=1)\n -> HashAggregate (cost=169268.05..169278.30 rows=1025 width=4) (actual\ntime=39046.814..39048.742 rows=1025 loops=1)\n -> Seq Scan on t1 (cost=0.00..144265.04 rows=10001204 width=4)\n(actual time=0.035..19616.631 rows=10001000 loops=1)\n Total runtime: 39050.634 ms\n(4 rows)\n\nTime: 39050.896 ms\n\nP.S. I have already use \"analyze verbose t1;\" several times so the database\nshould already be optimized. But I'm just new to PostgreSQL.\n\nThe environment I use is:\nPostgreSQL 9.3.1 (postgresql-9.3 9.3.1-1.pgdg12.4+1) on local machine\n(actually a vbox VM, but when I try the test, I didn't run something very\ndifferent or some heavy program for all queries. 
And the result seems\nconsistent.)\nUbuntu 12.04 LTS\nI followed the installation step in Quickstart section of\nhttp://wiki.postgresql.org/wiki/Apt\n\nbest regards,\njacket41142\n", "msg_date": "Wed, 11 Dec 2013 01:28:22 +0800", "msg_from": "jacket41142 <[email protected]>", "msg_from_op": true, "msg_subject": "select count(distinct ...)
is slower than select distinct in about 5x" }, { "msg_contents": "jacket41142 <[email protected]> wrote:\n\n> [ subject issue in detail ]\n\nPlease review this thread:\n\nhttp://www.postgresql.org/message-id/flat/CAPNY-2Utce-c+kNTwsMCbAk58=9mYeAeViTXT9LO7r1k77jukw@mail.gmail.com#CAPNY-2Utce-c+kNTwsMCbAk58=9mYeAeViTXT9LO7r1k77jukw@mail.gmail.com\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Dec 2013 10:16:40 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(distinct ...) is slower than select distinct in\n about 5x" }, { "msg_contents": "On Tue, Dec 10, 2013 at 9:28 AM, jacket41142 <[email protected]> wrote:\n\n\n>\n> test=> select distinct col_int from t1 group by col_int;\n> Time: 1177.936 ms\n>\n> So the performance difference is not very large.\n> But when I do that:\n>\n> test=> select count(distinct col_int) from t1;\n> count\n> -------\n> 1025\n> (1 row)\n>\n> Time: 7367.476 ms\n>\n\n\ncount(distinct ...) always sorts, rather than using a hash, to do its work.\n I don't think that there is any fundamental reason that it could not be\nchanged to allow it to use hashing, it just hasn't been done yet. It is\ncomplicated by the fact that you can have multiple count() expressions in\nthe same query which demand sorting/grouping on different columns.\n\nCheers,\n\nJeff\n\nOn Tue, Dec 10, 2013 at 9:28 AM, jacket41142 <[email protected]> wrote:\n \n\n\ntest=> select distinct col_int from t1 group by col_int;Time: 1177.936 msSo the performance difference is not very large.But when I do that:test=> select count(distinct col_int) from t1;\n\n\n count -------  1025(1 row)Time: 7367.476 mscount(distinct ...) always sorts, rather than using a hash, to do its work.  I don't think that there is any fundamental reason that it could not be changed to allow it to use hashing, it just hasn't been done yet.  It is complicated by the fact that you can have multiple count() expressions in the same query which demand sorting/grouping on different columns.\nCheers,Jeff", "msg_date": "Tue, 10 Dec 2013 10:19:46 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(distinct ...) is slower than select\n distinct in about 5x" } ]
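One detail worth adding to the count(*)-over-subquery workaround shown in that thread: count(DISTINCT col) ignores NULLs, while SELECT DISTINCT keeps a NULL group, so the two counts can differ by one on a nullable column. A version against the same t1 table that keeps the semantics identical and still gets the HashAggregate plan seen in the EXPLAIN output above:

-- Same answer as SELECT count(DISTINCT col_int) FROM t1, but planned
-- with a HashAggregate instead of a per-aggregate sort.
SELECT count(*)
FROM (SELECT DISTINCT col_int
      FROM t1
      WHERE col_int IS NOT NULL) AS tmp;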
[ { "msg_contents": "Hi,\n\nmy sql is very simple,\nreturns one row,\nwhere conditions are assigned to primary keys\n\n\n*/select g.gd_index, gd.full_name/**/\n/**/from gd g join gd_data gd on (g.id_gd = gd.id_gd)/**/\n/**/where gd.id_gd_data = 1111 OR g.id_gd = 1111;/*\n\n\nbut generates \"crazy\" plan with Merge Join on big amount of rows (both tables contains 500000 rows)\nbecause Index scans ignore conditions, conditions are processed after index sacans on Merge Join\n\n*/Merge Join (cost=0.00..46399.80 rows=2 width=115) (actual time=3.881..644.409 rows=1 loops=1)/**/\n/**/ Merge Cond: (g.id_gd = gd.id_gd)/**/\n/**/ Join Filter: ((gd.id_gd_data = 1111) OR (g.id_gd = 1111))/**/\n/**/ -> Index Scan using pk_gd on gd g (cost=0.00..14117.79 rows=500001 width=40) (actual time=0.019..146.521 rows=500001 loops=1)/**/\n/**/ -> Index Scan using fki_gd on gd_data gd (cost=0.00..22282.04 rows=500001 width=99) (actual time=0.016..157.384 rows=500001 loops=1)/**/\n/**/Total runtime: 644.460 ms/*\n\n\nmodel is very simple\n\n\n/CREATE TABLE gd (//\n// id_gd bigint NOT NULL,//\n// gd_index character varying(60) NOT NULL,//\n// notes text,//\n// notes_exists integer NOT NULL DEFAULT 0,//\n// CONSTRAINT pk_gd PRIMARY KEY (id_gd )//\n//)//\n//\n//\n//CREATE TABLE gd_data (//\n// id_gd_data bigint NOT NULL,//\n// id_gd bigint NOT NULL,//\n// short_name character varying(120) NOT NULL,//\n// full_name character varying(512) NOT NULL,//\n// notes text,//\n// notes_exists integer NOT NULL DEFAULT 0,//\n// CONSTRAINT pk_gd_data PRIMARY KEY (id_gd_data ),//\n// CONSTRAINT fk_gd FOREIGN KEY (id_gd)//\n// REFERENCES gd (id_gd) MATCH SIMPLE//\n// ON UPDATE NO ACTION ON DELETE NO ACTION//\n//)//\n//\n//CREATE INDEX fki_gd//\n// ON gd_data//\n// USING btree//\n// (id_gd );//\n/\n\n\nmy configuration from (select * from pg_settings):\n\n\"server_version\";\"9.1.10\"\n\"block_size\";\"8192\"\n\"cpu_index_tuple_cost\";\"0.005\"\n\"cpu_operator_cost\";\"0.0025\"\n\"cpu_tuple_cost\";\"0.01\"\n\"cursor_tuple_fraction\";\"0.1\"\n\"default_statistics_target\";\"1000\"\n\"enable_bitmapscan\";\"on\"\n\"enable_hashagg\";\"on\"\n\"enable_hashjoin\";\"on\"\n\"enable_indexscan\";\"on\"\n\"enable_material\";\"on\"\n\"enable_mergejoin\";\"on\"\n\"enable_nestloop\";\"on\"\n\"enable_seqscan\";\"on\"\n\"enable_sort\";\"on\"\n\"enable_tidscan\";\"on\"\n\"maintenance_work_mem\";\"262144\"\n\"max_connections\";\"10\"\n\"max_files_per_process\";\"1000\"\n\"max_locks_per_transaction\";\"64\"\n\"max_pred_locks_per_transaction\";\"64\"\n\"max_prepared_transactions\";\"10\"\n\"random_page_cost\";\"1.5\"\n\"seq_page_cost\";\"1\"\n\"shared_buffers\";\"65536\"\n\"temp_buffers\";\"1024\"\n\"work_mem\";\"131072\"\n\n\n\n\n\nThank you for your help.\n \n\nKris Olszewski\n\n\n\n\n\n\n\n\nHi,\n\nmy sql is very simple, \nreturns one row,\nwhere conditions are assigned to primary keys \n\n\nselect g.gd_index, gd.full_name \nfrom gd g join gd_data gd on (g.id_gd = gd.id_gd)\nwhere gd.id_gd_data = 1111 OR g.id_gd = 1111;\n\n\nbut generates \"crazy\" plan with Merge Join on big amount of rows (both tables contains 500000 rows)\nbecause Index scans ignore conditions, conditions are processed after index sacans on Merge Join\n\nMerge Join (cost=0.00..46399.80 rows=2 width=115) (actual time=3.881..644.409 rows=1 loops=1)\n Merge Cond: (g.id_gd = gd.id_gd)\n Join Filter: ((gd.id_gd_data = 1111) OR (g.id_gd = 1111))\n -> Index Scan using pk_gd on gd g (cost=0.00..14117.79 rows=500001 width=40) (actual time=0.019..146.521 rows=500001 loops=1)\n -> Index Scan 
using fki_gd on gd_data gd (cost=0.00..22282.04 rows=500001 width=99) (actual time=0.016..157.384 rows=500001 loops=1)\nTotal runtime: 644.460 ms\n\n\nmodel is very simple\n\n\nCREATE TABLE gd (\n id_gd bigint NOT NULL,\n gd_index character varying(60) NOT NULL,\n notes text,\n notes_exists integer NOT NULL DEFAULT 0,\n CONSTRAINT pk_gd PRIMARY KEY (id_gd )\n)\n\n\nCREATE TABLE gd_data (\n id_gd_data bigint NOT NULL,\n id_gd bigint NOT NULL,\n short_name character varying(120) NOT NULL,\n full_name character varying(512) NOT NULL,\n notes text,\n notes_exists integer NOT NULL DEFAULT 0,\n CONSTRAINT pk_gd_data PRIMARY KEY (id_gd_data ),\n CONSTRAINT fk_gd FOREIGN KEY (id_gd)\n REFERENCES gd (id_gd) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\n\nCREATE INDEX fki_gd\n ON gd_data\n USING btree\n (id_gd );\n\n\n\nmy configuration from (select * from pg_settings):\n\n\"server_version\";\"9.1.10\"\n\"block_size\";\"8192\"\n\"cpu_index_tuple_cost\";\"0.005\"\n\"cpu_operator_cost\";\"0.0025\"\n\"cpu_tuple_cost\";\"0.01\"\n\"cursor_tuple_fraction\";\"0.1\"\n\"default_statistics_target\";\"1000\"\n\"enable_bitmapscan\";\"on\"\n\"enable_hashagg\";\"on\"\n\"enable_hashjoin\";\"on\"\n\"enable_indexscan\";\"on\"\n\"enable_material\";\"on\"\n\"enable_mergejoin\";\"on\"\n\"enable_nestloop\";\"on\"\n\"enable_seqscan\";\"on\"\n\"enable_sort\";\"on\"\n\"enable_tidscan\";\"on\"\n\"maintenance_work_mem\";\"262144\"\n\"max_connections\";\"10\"\n\"max_files_per_process\";\"1000\"\n\"max_locks_per_transaction\";\"64\"\n\"max_pred_locks_per_transaction\";\"64\"\n\"max_prepared_transactions\";\"10\"\n\"random_page_cost\";\"1.5\"\n\"seq_page_cost\";\"1\"\n\"shared_buffers\";\"65536\"\n\"temp_buffers\";\"1024\"\n\"work_mem\";\"131072\"\n\n\n\n\n\nThank you for your help.\n \n\nKris Olszewski", "msg_date": "Wed, 11 Dec 2013 00:30:48 +0100", "msg_from": "Krzysztof Olszewski <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with slow query with WHERE conditions with OR clause on\n primary keys" }, { "msg_contents": "Krzysztof Olszewski wrote\n> Hi,\n> \n> my sql is very simple,\n> returns one row,\n> where conditions are assigned to primary keys\n> \n> \n> */select g.gd_index, gd.full_name/**/\n> /**/from gd g join gd_data gd on (g.id_gd = gd.id_gd)/**/\n> /**/where gd.id_gd_data = 1111 OR g.id_gd = 1111;/*\n> \n> \n> but generates \"crazy\" plan with Merge Join on big amount of rows (both\n> tables contains 500000 rows)\n> because Index scans ignore conditions, conditions are processed after\n> index sacans on Merge Join\n> \n> */Merge Join (cost=0.00..46399.80 rows=2 width=115) (actual\n> time=3.881..644.409 rows=1 loops=1)/**/\n> /**/ Merge Cond: (g.id_gd = gd.id_gd)/**/\n> /**/ Join Filter: ((gd.id_gd_data = 1111) OR (g.id_gd = 1111))/**/\n> /**/ -> Index Scan using pk_gd on gd g (cost=0.00..14117.79\n> rows=500001 width=40) (actual time=0.019..146.521 rows=500001 loops=1)/**/\n> /**/ -> Index Scan using fki_gd on gd_data gd (cost=0.00..22282.04\n> rows=500001 width=99) (actual time=0.016..157.384 rows=500001 loops=1)/**/\n> /**/Total runtime: 644.460 ms/*\n> \n> \n> model is very simple\n> \n> \n> /CREATE TABLE gd (//\n> // id_gd bigint NOT NULL,//\n> // gd_index character varying(60) NOT NULL,//\n> // notes text,//\n> // notes_exists integer NOT NULL DEFAULT 0,//\n> // CONSTRAINT pk_gd PRIMARY KEY (id_gd )//\n> //)//\n> //\n> //\n> //CREATE TABLE gd_data (//\n> // id_gd_data bigint NOT NULL,//\n> // id_gd bigint NOT NULL,//\n> // short_name character varying(120) NOT NULL,//\n> // 
full_name character varying(512) NOT NULL,//\n> // notes text,//\n> // notes_exists integer NOT NULL DEFAULT 0,//\n> // CONSTRAINT pk_gd_data PRIMARY KEY (id_gd_data ),//\n> // CONSTRAINT fk_gd FOREIGN KEY (id_gd)//\n> // REFERENCES gd (id_gd) MATCH SIMPLE//\n> // ON UPDATE NO ACTION ON DELETE NO ACTION//\n> //)//\n> //\n> //CREATE INDEX fki_gd//\n> // ON gd_data//\n> // USING btree//\n> // (id_gd );//\n> /\n> \n> \n> my configuration from (select * from pg_settings):\n> \n> \"server_version\";\"9.1.10\"\n> \n> Thank you for your help.\n> \n> \n> Kris Olszewski\n\nIt cannot do any better since it cannot pre-filter either table using the\nwhere condition without risking removing rows that would meet the other\ntable's condition post-join.\n\nThe query you are executing makes no sense to me: I don't understand why you\nwould ever filter on gd.id_gd_data given the model you are showing.\n\nI believe your understanding of your model - or the model itself - is flawed\nbut as you have only provided code it is impossible to pinpoint where\nexactly the disconnect resides. You can either fix the model or the query -\nthe later by implementing sub-selects with where clauses manually - which\nthen encodes an assumption about your data that the current query cannot\nmake.\n\nYour model implies that a single gd record can have multiple gd_data records\nassociated with it.\n\nDavid J.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Problem-with-slow-query-with-WHERE-conditions-with-OR-clause-on-primary-keys-tp5782803p5782822.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 10 Dec 2013 23:18:38 -0800 (PST)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with slow query with WHERE conditions with OR clause on\n primary keys" }, { "msg_contents": "Thanx for your answer\n\nMy example is trivial because i want to show strange (for me) postgres\nbehavior with dealing with primary keys (extreme example), in real situation\nuser put search condition e.g. \"Panas\" and this generates query\n...\nwhere gd.other_code like 'Panas%' OR g.code like 'Panas%'\n..\n\nboth columns has very good indexes and selectivity for \"like 'Panas%'\" ...\n\nI have experience from Oracle with this type of queries, and Oracle have no\nproblem with it,\nexecutes select on index on other_code from gd and join g\nin next step executes select on index on code from g and join gd\nand this two results are connected in last step (like union)\nvery fast on minimal cost\n\nand in my opinion read whole huge tables only for 10 rows in result where\nconditions are very good ... 
is strange\n\n\n\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Problem-with-slow-query-with-WHERE-conditions-with-OR-clause-on-primary-keys-tp5782803p5783927.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 18 Dec 2013 11:23:21 -0800 (PST)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with slow query with WHERE conditions with OR clause on\n primary keys" }, { "msg_contents": "[email protected] wrote\n> Thanx for your answer\n> \n> My example is trivial because i want to show strange (for me) postgres\n> behavior with dealing with primary keys (extreme example), in real\n> situation user put search condition e.g. \"Panas\" and this generates query\n> ...\n> where gd.other_code like 'Panas%' OR g.code like 'Panas%'\n> ..\n> \n> both columns has very good indexes and selectivity for \"like 'Panas%'\" ...\n> \n> I have experience from Oracle with this type of queries, and Oracle have\n> no problem with it,\n> executes select on index on other_code from gd and join g\n> in next step executes select on index on code from g and join gd\n> and this two results are connected in last step (like union)\n> very fast on minimal cost\n> \n> and in my opinion read whole huge tables only for 10 rows in result where\n> conditions are very good ... is strange\n\nI suppose the equivalent query that you'd want would be:\n\nSELECT ... FROM gd JOIN gd_data USING (id_gd)\nWHERE id_gd IN (\n\nSELECT id_gd FROM gd WHERE ...\nUNION ALL -distinct not required in this situation\nSELECT id_gd FROM gd_data WHERE ...\n\n) --ignoring NULL implications\n\nIt does make sense conceptually...\n\nDavid J.\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Problem-with-slow-query-with-WHERE-conditions-with-OR-clause-on-primary-keys-tp5782803p5783942.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 18 Dec 2013 12:12:29 -0800 (PST)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with slow query with WHERE conditions with OR clause on\n primary keys" }, { "msg_contents": "On 12/11/2013 12:30 AM, Krzysztof Olszewski wrote:\n> select g.gd_index, gd.full_name\n> from gd g join gd_data gd on (g.id_gd = gd.id_gd)\n> where gd.id_gd_data = 1111 OR g.id_gd = 1111;\n\nHave you tried writing the query to filter on gd.id_gd rather than \ng.id_gd? 
I am not sure if the query planner will realize that it can \nreplace g.id_gd with gd.id_gd in the where clause.\n\nselect g.gd_index, gd.full_name\nfrom gd g join gd_data gd on (g.id_gd = gd.id_gd)\nwhere gd.id_gd_data = 1111 OR gd.id_gd = 1111;\n\n-- \nAndreas Karlsson\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 02:32:38 +0100", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with slow query with WHERE conditions with\n OR clause on primary keys" }, { "msg_contents": "2013/12/18 [email protected] <[email protected]>\n\n> Thanx for your answer\n>\n> My example is trivial because i want to show strange (for me) postgres\n> behavior with dealing with primary keys (extreme example), in real\n> situation\n> user put search condition e.g. \"Panas\" and this generates query\n> ...\n> where gd.other_code like 'Panas%' OR g.code like 'Panas%'\n> ..\n>\n> both columns has very good indexes and selectivity for \"like 'Panas%'\" ...\n>\n> I have experience from Oracle with this type of queries, and Oracle have no\n> problem with it,\n> executes select on index on other_code from gd and join g\n> in next step executes select on index on code from g and join gd\n> and this two results are connected in last step (like union)\n> very fast on minimal cost\n>\n> and in my opinion read whole huge tables only for 10 rows in result where\n> conditions are very good ... is strange\n>\n>\nMaybe index is not in good form\n\ntry to build index with varchar_pattern_ops flag\n\nhttp://postgres.cz/wiki/PostgreSQL_SQL_Tricks_I#LIKE_optimalization\n\nCREATE INDEX like_index ON people(surname varchar_pattern_ops);\n\nRegards\n\nPavel Stehule\n\n\n>\n>\n>\n>\n>\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Problem-with-slow-query-with-WHERE-conditions-with-OR-clause-on-primary-keys-tp5782803p5783927.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2013/12/18 [email protected] <[email protected]>\nThanx for your answer\n\nMy example is trivial because i want to show strange (for me) postgres\nbehavior with dealing with primary keys (extreme example), in real situation\nuser put search condition e.g.  \"Panas\" and this generates query\n...\nwhere gd.other_code like 'Panas%' OR g.code like 'Panas%'\n..\n\nboth columns has very good indexes and selectivity for \"like 'Panas%'\" ...\n\nI have experience from Oracle with this type of queries, and Oracle have no\nproblem with it,\nexecutes select on index on other_code from gd and join g\nin next step executes select on index on code from g and join gd\nand this two results are connected in last step (like union)\nvery fast on minimal cost\n\nand in my opinion read whole huge tables only for 10 rows in result where\nconditions are very good  ... 
is strange\nMaybe index is not in good formtry to build index with varchar_pattern_ops flaghttp://postgres.cz/wiki/PostgreSQL_SQL_Tricks_I#LIKE_optimalization\nCREATE INDEX like_index ON people(surname varchar_pattern_ops);RegardsPavel Stehule \n\n\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Problem-with-slow-query-with-WHERE-conditions-with-OR-clause-on-primary-keys-tp5782803p5783927.html\n\n\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 19 Dec 2013 18:29:25 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Problem with slow query with WHERE conditions with\n OR clause on primary keys" } ]
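A concrete form of the manual decomposition suggested in the replies above, written against the gd / gd_data schema from the first message (only names that appear in that thread are used). Each IN branch is a lookup on its own index (pk_gd_data and pk_gd), so the outer join only has to touch the handful of matching ids instead of merging both 500k-row tables. As noted in the thread, this encodes the assumption that filtering by id_gd is acceptable: it also returns sibling gd_data rows that share the same id_gd.

-- UNION ALL is enough inside IN (the IN itself removes duplicates).
SELECT g.gd_index, d.full_name
FROM gd g
JOIN gd_data d ON d.id_gd = g.id_gd
WHERE g.id_gd IN (
    SELECT id_gd FROM gd_data WHERE id_gd_data = 1111
    UNION ALL
    SELECT id_gd FROM gd      WHERE id_gd      = 1111
);

For the later LIKE 'Panas%' variant, the same shape applies once the text columns have indexes built with varchar_pattern_ops, as shown in the last reply.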
[ { "msg_contents": "Thanks very much.\n\nI think another problem is that the cost estimation isn't good enough to\nreflex real cost. Since we can see, from \"explain analyze ...\",\ncount(distinct ...) has smallest cost between the others, but since it uses\nsorts, the time complexity should be higher especially for large amount of\nrows.\n\nAlso I think even if we can have multiple count() expressions, the\noptimizer should also be able to choose between use sort, HashAggregate or\nmaybe something like linear aggregate if sorts are not needed or other\nmethods if exist. Also this may be done as just one job for entire table of\ninterested columns, or for each column separately.\n\nregards,\njacket41142\n\n\n2013/12/11 Jeff Janes <[email protected]>\n\n> On Tue, Dec 10, 2013 at 9:28 AM, jacket41142 <[email protected]>wrote:\n>\n>\n>>\n>> test=> select distinct col_int from t1 group by col_int;\n>> Time: 1177.936 ms\n>>\n>> So the performance difference is not very large.\n>> But when I do that:\n>>\n>> test=> select count(distinct col_int) from t1;\n>> count\n>> -------\n>> 1025\n>> (1 row)\n>>\n>> Time: 7367.476 ms\n>>\n>\n>\n> count(distinct ...) always sorts, rather than using a hash, to do its\n> work. I don't think that there is any fundamental reason that it could not\n> be changed to allow it to use hashing, it just hasn't been done yet. It is\n> complicated by the fact that you can have multiple count() expressions in\n> the same query which demand sorting/grouping on different columns.\n>\n> Cheers,\n>\n> Jeff\n>\n\nThanks very much.I think another problem is that the cost estimation isn't good enough to reflex real cost. Since we can see, from \"explain analyze ...\", count(distinct ...) has smallest cost between the others, but since it uses sorts, the time complexity should be higher especially for large amount of rows.\nAlso I think even if we can have multiple count() expressions, the optimizer should also be able to choose between use sort, HashAggregate or maybe something like linear aggregate if sorts are not needed or other methods if exist. Also this may be done as just one job for entire table of interested columns, or for each column separately.\nregards,jacket411422013/12/11 Jeff Janes <[email protected]>\nOn Tue, Dec 10, 2013 at 9:28 AM, jacket41142 <[email protected]> wrote:\n \n\n\ntest=> select distinct col_int from t1 group by col_int;Time: 1177.936 msSo the performance difference is not very large.But when I do that:test=> select count(distinct col_int) from t1;\n\n\n\n count -------  1025(1 row)Time: 7367.476 mscount(distinct ...) always sorts, rather than using a hash, to do its work.  I don't think that there is any fundamental reason that it could not be changed to allow it to use hashing, it just hasn't been done yet.  It is complicated by the fact that you can have multiple count() expressions in the same query which demand sorting/grouping on different columns.\nCheers,Jeff", "msg_date": "Wed, 11 Dec 2013 09:23:03 +0800", "msg_from": "jacket41142 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select count(distinct ...) is slower than select\n distinct in about 5x" }, { "msg_contents": "I have done another experiment for count(*) vs count(distinct ...), on same\ntable schema but with 10000000 rows now. 
And for this time, the postgres\nversion is 9.3.2 (9.3.2-1.pgdg12.4+1).\nThese two has same resulted query plan with same estimated cost, but\ncount(*) is straightly fast.\n\ntest=> explain analyze select count(*) from t1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=169247.92..169247.93 rows=1 width=0) (actual\ntime=37775.187..37775.188 rows=1 loops=1)\n -> Seq Scan on t1 (cost=0.00..144247.94 rows=9999994 width=0) (actual\ntime=0.037..19303.022 rows=10000000 loops=1)\n Total runtime: 37775.216 ms\n(3 筆資料列)\n\n時間: 37775.493 ms\ntest=> explain analyze select count(distinct col_int) from t1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=169247.92..169247.93 rows=1 width=4) (actual\ntime=45883.192..45883.195 rows=1 loops=1)\n -> Seq Scan on t1 (cost=0.00..144247.94 rows=9999994 width=4) (actual\ntime=0.037..19652.540 rows=10000000 loops=1)\n Total runtime: 45883.224 ms\n(3 筆資料列)\n\n時間: 45883.473 ms\n\n\n\ntest=> select count(*) from t1;\n count\n----------\n 10000000\n(1 筆資料列)\n\n時間: 602.018 ms\ntest=> select count(*) from t1;\n count\n----------\n 10000000\n(1 筆資料列)\n\n時間: 598.291 ms\ntest=> select count(*) from t1;\n count\n----------\n 10000000\n(1 筆資料列)\n\n時間: 592.439 ms\n\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 筆資料列)\n\n時間: 10311.788 ms\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 筆資料列)\n\n時間: 7063.156 ms\ntest=> select count(distinct col_int) from t1;\n count\n-------\n 1025\n(1 筆資料列)\n\n時間: 6899.283 ms\n\n\nI don't think count(*) also uses sort since it should not be needed. But\nfor the query planner, it seems it can not distinguish between these two\nnow.\n\nregards,\njacket41142\n\n\n\n2013/12/11 jacket41142 <[email protected]>\n\n> Thanks very much.\n>\n> I think another problem is that the cost estimation isn't good enough to\n> reflex real cost. Since we can see, from \"explain analyze ...\",\n> count(distinct ...) has smallest cost between the others, but since it uses\n> sorts, the time complexity should be higher especially for large amount of\n> rows.\n>\n> Also I think even if we can have multiple count() expressions, the\n> optimizer should also be able to choose between use sort, HashAggregate or\n> maybe something like linear aggregate if sorts are not needed or other\n> methods if exist. Also this may be done as just one job for entire table of\n> interested columns, or for each column separately.\n>\n> regards,\n> jacket41142\n>\n>\n> 2013/12/11 Jeff Janes <[email protected]>\n>\n>> On Tue, Dec 10, 2013 at 9:28 AM, jacket41142 <[email protected]>wrote:\n>>\n>>\n>>>\n>>> test=> select distinct col_int from t1 group by col_int;\n>>> Time: 1177.936 ms\n>>>\n>>> So the performance difference is not very large.\n>>> But when I do that:\n>>>\n>>> test=> select count(distinct col_int) from t1;\n>>> count\n>>> -------\n>>> 1025\n>>> (1 row)\n>>>\n>>> Time: 7367.476 ms\n>>>\n>>\n>>\n>> count(distinct ...) always sorts, rather than using a hash, to do its\n>> work. I don't think that there is any fundamental reason that it could not\n>> be changed to allow it to use hashing, it just hasn't been done yet. 
It is\n>> complicated by the fact that you can have multiple count() expressions in\n>> the same query which demand sorting/grouping on different columns.\n>>\n>> Cheers,\n>>\n>> Jeff\n>>\n>\n>\n\nI have done another experiment for count(*) vs count(distinct ...), on same table schema but with 10000000 rows now. And for this time, the postgres version is 9.3.2 (9.3.2-1.pgdg12.4+1).These two has same resulted query plan with same estimated cost, but count(*) is straightly fast.\ntest=> explain analyze select count(*) from t1;                                                       QUERY PLAN                                                       ------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=169247.92..169247.93 rows=1 width=0) (actual time=37775.187..37775.188 rows=1 loops=1)   ->  Seq Scan on t1  (cost=0.00..144247.94 rows=9999994 width=0) (actual time=0.037..19303.022 rows=10000000 loops=1)\n Total runtime: 37775.216 ms(3 筆資料列)時間: 37775.493 mstest=> explain analyze select count(distinct col_int) from t1;                                                       QUERY PLAN                                                       \n------------------------------------------------------------------------------------------------------------------------ Aggregate  (cost=169247.92..169247.93 rows=1 width=4) (actual time=45883.192..45883.195 rows=1 loops=1)\n   ->  Seq Scan on t1  (cost=0.00..144247.94 rows=9999994 width=4) (actual time=0.037..19652.540 rows=10000000 loops=1) Total runtime: 45883.224 ms(3 筆資料列)時間: 45883.473 mstest=> select count(*) from t1;\n  count   ---------- 10000000(1 筆資料列)時間: 602.018 mstest=> select count(*) from t1;  count   ---------- 10000000(1 筆資料列)時間: 598.291 mstest=> select count(*) from t1;\n  count   ---------- 10000000(1 筆資料列)時間: 592.439 mstest=> select count(distinct col_int) from t1; count -------  1025(1 筆資料列)時間: 10311.788 mstest=> select count(distinct col_int) from t1;\n count -------  1025(1 筆資料列)時間: 7063.156 mstest=> select count(distinct col_int) from t1; count -------  1025(1 筆資料列)時間: 6899.283 msI don't think count(*) also uses sort since it should not be needed. But for the query planner, it seems it can not distinguish between these two now.\nregards,jacket411422013/12/11 jacket41142 <[email protected]>\nThanks very much.I think another problem is that the cost estimation isn't good enough to reflex real cost. Since we can see, from \"explain analyze ...\", count(distinct ...) has smallest cost between the others, but since it uses sorts, the time complexity should be higher especially for large amount of rows.\nAlso I think even if we can have multiple count() expressions, the optimizer should also be able to choose between use sort, HashAggregate or maybe something like linear aggregate if sorts are not needed or other methods if exist. Also this may be done as just one job for entire table of interested columns, or for each column separately.\nregards,jacket411422013/12/11 Jeff Janes <[email protected]>\nOn Tue, Dec 10, 2013 at 9:28 AM, jacket41142 <[email protected]> wrote:\n \n\n\ntest=> select distinct col_int from t1 group by col_int;Time: 1177.936 msSo the performance difference is not very large.But when I do that:test=> select count(distinct col_int) from t1;\n\n\n\n\n count -------  1025(1 row)Time: 7367.476 mscount(distinct ...) always sorts, rather than using a hash, to do its work.  
I don't think that there is any fundamental reason that it could not be changed to allow it to use hashing, it just hasn't been done yet.  It is complicated by the fact that you can have multiple count() expressions in the same query which demand sorting/grouping on different columns.\nCheers,Jeff", "msg_date": "Wed, 11 Dec 2013 09:57:15 +0800", "msg_from": "jacket41142 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select count(distinct ...) is slower than select\n distinct in about 5x" }, { "msg_contents": "On Tuesday, December 10, 2013, jacket41142 wrote:\n\n> Thanks very much.\n>\n> I think another problem is that the cost estimation isn't good enough to\n> reflex real cost. Since we can see, from \"explain analyze ...\",\n> count(distinct ...) has smallest cost between the others, but since it uses\n> sorts, the time complexity should be higher especially for large amount of\n> rows.\n>\n\nThat one is easy to explain. The cost estimate is not intended to be an\nabsolute estimate, it is just an relative estimate to choose between\nalternatives. Since the current implementation of count(distinct ...)\ndoes not *have* any alternatives for that step in the process, there is no\npoint in estimating a cost for it. So part of providing it with\nalternatives will have to be providing those cost estimates as well.\n\n\n> Also I think even if we can have multiple count() expressions, the\n> optimizer should also be able to choose between use sort, HashAggregate or\n> maybe something like linear aggregate if sorts are not needed or other\n> methods if exist. Also this may be done as just one job for entire table of\n> interested columns, or for each column separately.\n>\n\nRight. I hope this gets fixed. It's been on my todo list for a while, but\nat the current rate of going through my todo list, it will takes a few\ndecades to get to if it is left up to me....\n\nCheers,\n\nJeff\n\nOn Tuesday, December 10, 2013, jacket41142 wrote:Thanks very much.I think another problem is that the cost estimation isn't good enough to reflex real cost. Since we can see, from \"explain analyze ...\", count(distinct ...) has smallest cost between the others, but since it uses sorts, the time complexity should be higher especially for large amount of rows.\nThat one is easy to explain.  The cost estimate is not intended to be an absolute estimate, it is just an relative estimate to choose between alternatives.   Since the current implementation of count(distinct ...) does not *have* any alternatives for that step in the process, there is no point in estimating a cost for it.  So part of providing it with alternatives will have to be providing those cost estimates as well.\n Also I think even if we can have multiple count() expressions, the optimizer should also be able to choose between use sort, HashAggregate or maybe something like linear aggregate if sorts are not needed or other methods if exist. Also this may be done as just one job for entire table of interested columns, or for each column separately.\nRight.  I hope this gets fixed.  It's been on my todo list for a while, but at the current rate of going through my todo list, it will takes a few decades to get to if it is left up to me....\nCheers,Jeff", "msg_date": "Tue, 10 Dec 2013 18:24:57 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(distinct ...) 
is slower than select\n distinct in about 5x" }, { "msg_contents": "2013/12/11 Jeff Janes <[email protected]>\n\n> On Tuesday, December 10, 2013, jacket41142 wrote:\n>\n>> Thanks very much.\n>>\n>> I think another problem is that the cost estimation isn't good enough to\n>> reflex real cost. Since we can see, from \"explain analyze ...\",\n>> count(distinct ...) has smallest cost between the others, but since it uses\n>> sorts, the time complexity should be higher especially for large amount of\n>> rows.\n>>\n>\n> That one is easy to explain. The cost estimate is not intended to be an\n> absolute estimate, it is just an relative estimate to choose between\n> alternatives. Since the current implementation of count(distinct ...)\n> does not *have* any alternatives for that step in the process, there is no\n> point in estimating a cost for it. So part of providing it with\n> alternatives will have to be providing those cost estimates as well.\n>\n\nI got it. Thanks very much for explain.\n\n\n>\n>> Also I think even if we can have multiple count() expressions, the\n>> optimizer should also be able to choose between use sort, HashAggregate or\n>> maybe something like linear aggregate if sorts are not needed or other\n>> methods if exist. Also this may be done as just one job for entire table of\n>> interested columns, or for each column separately.\n>>\n>\n> Right. I hope this gets fixed. It's been on my todo list for a while,\n> but at the current rate of going through my todo list, it will takes a few\n> decades to get to if it is left up to me....\n>\n\nThanks very much for your effort. Also it's still good to know for me that\nthis problem will be fixed in future. :)\nAnd so until now, if someone want to use count(distinct ...), he can use a\nworkaround like subquery if performance is a concern. (Of course, he also\nneeds to take care about NULL values as mentioned in\nhttp://www.postgresql.org/message-id/flat/CAPNY-2Utce-c+kNTwsMCbAk58=9mYeAeViTXT9LO7r1k77jukw@mail.gmail.com#CAPNY-2Utce-c+kNTwsMCbAk58=9mYeAeViTXT9LO7r1k77jukw@mail.gmail.com\n)\n\n\nbest regards,\njacket41142\n\n2013/12/11 Jeff Janes <[email protected]>\nOn Tuesday, December 10, 2013, jacket41142 wrote:Thanks very much.\nI think another problem is that the cost estimation isn't good enough to reflex real cost. Since we can see, from \"explain analyze ...\", count(distinct ...) has smallest cost between the others, but since it uses sorts, the time complexity should be higher especially for large amount of rows.\nThat one is easy to explain.  The cost estimate is not intended to be an absolute estimate, it is just an relative estimate to choose between alternatives.   Since the current implementation of count(distinct ...) does not *have* any alternatives for that step in the process, there is no point in estimating a cost for it.  So part of providing it with alternatives will have to be providing those cost estimates as well.\nI got it. Thanks very much for explain.\n Also I think even if we can have multiple count() expressions, the optimizer should also be able to choose between use sort, HashAggregate or maybe something like linear aggregate if sorts are not needed or other methods if exist. Also this may be done as just one job for entire table of interested columns, or for each column separately.\nRight.  I hope this gets fixed.  
It's been on my todo list for a while, but at the current rate of going through my todo list, it will takes a few decades to get to if it is left up to me....\nThanks very much for your effort. Also it's still good to know for me that this problem will be fixed in future. :)And so until now, if someone want to use count(distinct ...), he can use a workaround like subquery if performance is a concern. (Of course, he also needs to take care about NULL values as mentioned in http://www.postgresql.org/message-id/flat/CAPNY-2Utce-c+kNTwsMCbAk58=9mYeAeViTXT9LO7r1k77jukw@mail.gmail.com#CAPNY-2Utce-c+kNTwsMCbAk58=9mYeAeViTXT9LO7r1k77jukw@mail.gmail.com)\nbest regards,jacket41142", "msg_date": "Thu, 12 Dec 2013 11:22:37 +0800", "msg_from": "jacket41142 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select count(distinct ...) is slower than select\n distinct in about 5x" } ]
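A side note on the timings in that last experiment: the bare SELECT count(*) runs in about 0.6 s while EXPLAIN ANALYZE of the same statement reports roughly 37 s, and much of that gap is likely the per-row timing instrumentation EXPLAIN ANALYZE adds on a 10M-row scan rather than the query itself. On 9.2 and later the overhead can be reduced while keeping the actual row counts:

-- Keeps actual row counts but skips the per-row timer calls, so the
-- reported runtime is much closer to the bare query's.
EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM t1;
EXPLAIN (ANALYZE, TIMING OFF) SELECT count(DISTINCT col_int) FROM t1;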